Test Report: Hyper-V_Windows 21341

890003c5847d742050af13aa4e3a32f9efad98ac:2025-09-04:41269

Tests failed (10/212)

TestErrorSpam/setup (193.11s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-921000 -n=1 --memory=3072 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-921000 --driver=hyperv
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-921000 -n=1 --memory=3072 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-921000 --driver=hyperv: (3m13.1131577s)
error_spam_test.go:96: unexpected stderr: "! Failing to connect to https://registry.k8s.io/ from inside the minikube VM"
error_spam_test.go:96: unexpected stderr: "* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/"
error_spam_test.go:110: minikube stdout:
* [nospam-921000] minikube v1.36.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6282 Build 19045.6282
- KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
- MINIKUBE_LOCATION=21341
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-921000" primary control-plane node in "nospam-921000" cluster
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-921000" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
--- FAIL: TestErrorSpam/setup (193.11s)
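The failure above is not a crash: `minikube start` succeeded, but the test treats any stderr line outside its allowlist as "error spam". A minimal sketch of that kind of stderr scan, using the two stderr lines from this run (the allowlist pattern here is hypothetical, not minikube's actual list):

```shell
# Stderr captured from the run above.
stderr_output='! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/'

# Hypothetical allowlist of benign stderr prefixes.
allow_pattern='^\* (Enabled addons|Done!)'

# Any line not matching the allowlist counts as unexpected spam.
unexpected=$(printf '%s\n' "$stderr_output" | grep -vE "$allow_pattern" || true)
if [ -n "$unexpected" ]; then
    echo "unexpected stderr:"
    printf '%s\n' "$unexpected"
fi
```

Both lines fail the (hypothetical) allowlist here, matching the two `unexpected stderr` entries reported at error_spam_test.go:96.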

TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-228500 service --namespace=default --https --url hello-node: exit status 1 (15.0104422s)
functional_test.go:1521: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-228500 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)

TestFunctional/parallel/ServiceCmd/Format (15.02s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-228500 service hello-node --url --format={{.IP}}: exit status 1 (15.0162408s)
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-228500 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1558: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.02s)

TestFunctional/parallel/ServiceCmd/URL (15.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-228500 service hello-node --url: exit status 1 (15.038043s)
functional_test.go:1571: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-228500 service hello-node --url": exit status 1
functional_test.go:1575: found endpoint for hello-node: 
functional_test.go:1583: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.04s)
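All three ServiceCmd subtests fail the same way: `minikube service ... --url` exits 1 after ~15s, so the test gets an empty URL and the scheme check at functional_test.go:1583 sees `""` instead of `"http"`. A minimal sketch of that scheme assertion, using a made-up URL (this run returned no URL at all):

```shell
# Hypothetical service URL; a passing run would return something like this.
url="http://192.168.49.2:31000"

# Strip everything from "://" onward to get the scheme.
scheme="${url%%://*}"
if [ "$scheme" = "http" ]; then
    echo "scheme ok: $scheme"
else
    echo "expected scheme \"http\", got \"$scheme\""
fi
```

With the empty string this run produced, `scheme` would also be empty, which is exactly the `expected scheme to be -"http"- got scheme: *""*` message above.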

TestMultiControlPlane/serial/PingHostFromPods (68.31s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 kubectl -- exec busybox-7b57f96db7-5cfq2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 kubectl -- exec busybox-7b57f96db7-5cfq2 -- sh -c "ping -c 1 172.25.112.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-270000 kubectl -- exec busybox-7b57f96db7-5cfq2 -- sh -c "ping -c 1 172.25.112.1": exit status 1 (10.5035037s)

-- stdout --
	PING 172.25.112.1 (172.25.112.1): 56 data bytes
	
	--- 172.25.112.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.25.112.1) from pod (busybox-7b57f96db7-5cfq2): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 kubectl -- exec busybox-7b57f96db7-c6z29 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 kubectl -- exec busybox-7b57f96db7-c6z29 -- sh -c "ping -c 1 172.25.112.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-270000 kubectl -- exec busybox-7b57f96db7-c6z29 -- sh -c "ping -c 1 172.25.112.1": exit status 1 (10.5029093s)

-- stdout --
	PING 172.25.112.1 (172.25.112.1): 56 data bytes
	
	--- 172.25.112.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.25.112.1) from pod (busybox-7b57f96db7-c6z29): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 kubectl -- exec busybox-7b57f96db7-lxhhz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 kubectl -- exec busybox-7b57f96db7-lxhhz -- sh -c "ping -c 1 172.25.112.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-270000 kubectl -- exec busybox-7b57f96db7-lxhhz -- sh -c "ping -c 1 172.25.112.1": exit status 1 (10.5435184s)

-- stdout --
	PING 172.25.112.1 (172.25.112.1): 56 data bytes
	
	--- 172.25.112.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.25.112.1) from pod (busybox-7b57f96db7-lxhhz): exit status 1
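All three pods resolved `host.minikube.internal` to 172.25.112.1 but lost 100% of ping packets to it, which on Hyper-V typically points at the host firewall or the Default Switch NAT dropping ICMP from the guest network. A sketch of extracting the packet-loss figure from the ping statistics line above (the sample line is copied from this report; the parsing itself is illustrative):

```shell
# Statistics line as printed by busybox ping in the failures above.
stats='1 packets transmitted, 0 packets received, 100% packet loss'

# Pull out the percentage before "% packet loss".
loss=$(printf '%s\n' "$stats" | sed -n 's/.*, \([0-9][0-9]*\)% packet loss.*/\1/p')
if [ "$loss" -eq 100 ]; then
    echo "host unreachable: ${loss}% packet loss"
fi
```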
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-270000 -n ha-270000
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-270000 -n ha-270000: (12.2392685s)
helpers_test.go:252: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 logs -n 25: (8.5783226s)
helpers_test.go:260: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                            │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-228500 image ls --format table --alsologtostderr                                                               │ functional-228500 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 22:51 UTC │ 03 Sep 25 22:51 UTC │
	│ image   │ functional-228500 image build -t localhost/my-image:functional-228500 testdata\build --alsologtostderr                    │ functional-228500 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 22:51 UTC │ 03 Sep 25 22:52 UTC │
	│ image   │ functional-228500 image ls                                                                                                │ functional-228500 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 22:52 UTC │ 03 Sep 25 22:52 UTC │
	│ delete  │ -p functional-228500                                                                                                      │ functional-228500 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 22:55 UTC │ 03 Sep 25 22:56 UTC │
	│ start   │ ha-270000 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=hyperv                                     │ ha-270000         │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 22:56 UTC │ 03 Sep 25 23:08 UTC │
	│ kubectl │ ha-270000 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                          │ ha-270000         │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:09 UTC │ 03 Sep 25 23:09 UTC │
	│ kubectl │ ha-270000 kubectl -- rollout status deployment/busybox                                                                    │ ha-270000         │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:09 UTC │ 03 Sep 25 23:09 UTC │
	│ kubectl │ ha-270000 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-270000         │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:09 UTC │ 03 Sep 25 23:09 UTC │
	│ kubectl │ ha-270000 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                                     │ ha-270000         │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:09 UTC │ 03 Sep 25 23:09 UTC │
	│ kubectl │ ha-270000 kubectl -- exec busybox-7b57f96db7-5cfq2 -- nslookup kubernetes.io                                              │ ha-270000         │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:09 UTC │ 03 Sep 25 23:09 UTC │
	│ kubectl │ ha-270000 kubectl -- exec busybox-7b57f96db7-c6z29 -- nslookup kubernetes.io                                              │ ha-270000         │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:09 UTC │ 03 Sep 25 23:09 UTC │
	│ kubectl │ ha-270000 kubectl -- exec busybox-7b57f96db7-lxhhz -- nslookup kubernetes.io                                              │ ha-270000         │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:09 UTC │ 03 Sep 25 23:09 UTC │
	│ kubectl │ ha-270000 kubectl -- exec busybox-7b57f96db7-5cfq2 -- nslookup kubernetes.default                                         │ ha-270000         │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:09 UTC │ 03 Sep 25 23:09 UTC │
	│ kubectl │ ha-270000 kubectl -- exec busybox-7b57f96db7-c6z29 -- nslookup kubernetes.default                                         │ ha-270000         │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:09 UTC │ 03 Sep 25 23:09 UTC │
	│ kubectl │ ha-270000 kubectl -- exec busybox-7b57f96db7-lxhhz -- nslookup kubernetes.default                                         │ ha-270000         │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:09 UTC │ 03 Sep 25 23:09 UTC │
	│ kubectl │ ha-270000 kubectl -- exec busybox-7b57f96db7-5cfq2 -- nslookup kubernetes.default.svc.cluster.local                       │ ha-270000         │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:09 UTC │ 03 Sep 25 23:09 UTC │
	│ kubectl │ ha-270000 kubectl -- exec busybox-7b57f96db7-c6z29 -- nslookup kubernetes.default.svc.cluster.local                       │ ha-270000         │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:09 UTC │ 03 Sep 25 23:09 UTC │
	│ kubectl │ ha-270000 kubectl -- exec busybox-7b57f96db7-lxhhz -- nslookup kubernetes.default.svc.cluster.local                       │ ha-270000         │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:09 UTC │ 03 Sep 25 23:09 UTC │
	│ kubectl │ ha-270000 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                                     │ ha-270000         │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:09 UTC │ 03 Sep 25 23:09 UTC │
	│ kubectl │ ha-270000 kubectl -- exec busybox-7b57f96db7-5cfq2 -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 │ ha-270000         │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:09 UTC │ 03 Sep 25 23:09 UTC │
	│ kubectl │ ha-270000 kubectl -- exec busybox-7b57f96db7-5cfq2 -- sh -c ping -c 1 172.25.112.1                                        │ ha-270000         │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:09 UTC │                     │
	│ kubectl │ ha-270000 kubectl -- exec busybox-7b57f96db7-c6z29 -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 │ ha-270000         │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:09 UTC │ 03 Sep 25 23:09 UTC │
	│ kubectl │ ha-270000 kubectl -- exec busybox-7b57f96db7-c6z29 -- sh -c ping -c 1 172.25.112.1                                        │ ha-270000         │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:09 UTC │                     │
	│ kubectl │ ha-270000 kubectl -- exec busybox-7b57f96db7-lxhhz -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 │ ha-270000         │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:09 UTC │ 03 Sep 25 23:09 UTC │
	│ kubectl │ ha-270000 kubectl -- exec busybox-7b57f96db7-lxhhz -- sh -c ping -c 1 172.25.112.1                                        │ ha-270000         │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:09 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/03 22:56:40
	Running on machine: minikube6
	Binary: Built with gc go1.24.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0903 22:56:40.560554    1228 out.go:360] Setting OutFile to fd 1384 ...
	I0903 22:56:40.633073    1228 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 22:56:40.633073    1228 out.go:374] Setting ErrFile to fd 1116...
	I0903 22:56:40.633073    1228 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 22:56:40.652287    1228 out.go:368] Setting JSON to false
	I0903 22:56:40.655755    1228 start.go:130] hostinfo: {"hostname":"minikube6","uptime":23305,"bootTime":1756916894,"procs":177,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6282 Build 19045.6282","kernelVersion":"10.0.19045.6282 Build 19045.6282","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0903 22:56:40.655847    1228 start.go:138] gopshost.Virtualization returned error: not implemented yet
	I0903 22:56:40.659982    1228 out.go:179] * [ha-270000] minikube v1.36.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6282 Build 19045.6282
	I0903 22:56:40.668009    1228 notify.go:220] Checking for updates...
	I0903 22:56:40.670114    1228 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0903 22:56:40.673786    1228 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0903 22:56:40.676878    1228 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0903 22:56:40.680635    1228 out.go:179]   - MINIKUBE_LOCATION=21341
	I0903 22:56:40.683147    1228 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0903 22:56:40.689512    1228 driver.go:421] Setting default libvirt URI to qemu:///system
	I0903 22:56:45.855224    1228 out.go:179] * Using the hyperv driver based on user configuration
	I0903 22:56:45.859601    1228 start.go:304] selected driver: hyperv
	I0903 22:56:45.859601    1228 start.go:918] validating driver "hyperv" against <nil>
	I0903 22:56:45.859601    1228 start.go:929] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0903 22:56:45.906781    1228 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0903 22:56:45.908743    1228 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0903 22:56:45.908743    1228 cni.go:84] Creating CNI manager for ""
	I0903 22:56:45.908743    1228 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0903 22:56:45.908743    1228 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0903 22:56:45.908743    1228 start.go:348] cluster config:
	{Name:ha-270000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-270000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 22:56:45.908743    1228 iso.go:125] acquiring lock: {Name:mk966bde02eeea119c68f0830e579f0a83ec9e11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 22:56:45.912958    1228 out.go:179] * Starting "ha-270000" primary control-plane node in "ha-270000" cluster
	I0903 22:56:45.916152    1228 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0903 22:56:45.916152    1228 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0903 22:56:45.917130    1228 cache.go:58] Caching tarball of preloaded images
	I0903 22:56:45.917258    1228 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0903 22:56:45.917258    1228 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0903 22:56:45.917747    1228 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\config.json ...
	I0903 22:56:45.918325    1228 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\config.json: {Name:mk66003acb5cfca8863a58eed44798c01e27bcf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 22:56:45.918495    1228 start.go:360] acquireMachinesLock for ha-270000: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0903 22:56:45.919503    1228 start.go:364] duration metric: took 1.008ms to acquireMachinesLock for "ha-270000"
	I0903 22:56:45.919689    1228 start.go:93] Provisioning new machine with config: &{Name:ha-270000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-270000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0903 22:56:45.919689    1228 start.go:125] createHost starting for "" (driver="hyperv")
	I0903 22:56:45.925004    1228 out.go:252] * Creating hyperv VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0903 22:56:45.925733    1228 start.go:159] libmachine.API.Create for "ha-270000" (driver="hyperv")
	I0903 22:56:45.925733    1228 client.go:168] LocalClient.Create starting
	I0903 22:56:45.926025    1228 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0903 22:56:45.926633    1228 main.go:141] libmachine: Decoding PEM data...
	I0903 22:56:45.926673    1228 main.go:141] libmachine: Parsing certificate...
	I0903 22:56:45.927006    1228 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0903 22:56:45.927305    1228 main.go:141] libmachine: Decoding PEM data...
	I0903 22:56:45.927305    1228 main.go:141] libmachine: Parsing certificate...
	I0903 22:56:45.927484    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0903 22:56:48.016664    1228 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0903 22:56:48.016864    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:56:48.016999    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0903 22:56:49.751836    1228 main.go:141] libmachine: [stdout =====>] : False
	
	I0903 22:56:49.751836    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:56:49.751836    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0903 22:56:51.299486    1228 main.go:141] libmachine: [stdout =====>] : True
	
	I0903 22:56:51.299562    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:56:51.299562    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0903 22:56:54.893059    1228 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0903 22:56:54.893059    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:56:54.895436    1228 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.36.0-1753487480-21147-amd64.iso...
	I0903 22:56:55.547200    1228 main.go:141] libmachine: Creating SSH key...
	I0903 22:56:55.694695    1228 main.go:141] libmachine: Creating VM...
	I0903 22:56:55.694695    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0903 22:56:58.488314    1228 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0903 22:56:58.488376    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:56:58.488376    1228 main.go:141] libmachine: Using switch "Default Switch"
	I0903 22:56:58.488376    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0903 22:57:00.159456    1228 main.go:141] libmachine: [stdout =====>] : True
	
	I0903 22:57:00.160163    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:00.160163    1228 main.go:141] libmachine: Creating VHD
	I0903 22:57:00.160368    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0903 22:57:03.740838    1228 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : ABB2571A-43DB-4DDE-8704-A111EA40BF83
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0903 22:57:03.741294    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:03.741294    1228 main.go:141] libmachine: Writing magic tar header
	I0903 22:57:03.741294    1228 main.go:141] libmachine: Writing SSH key tar header
	I0903 22:57:03.753725    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0903 22:57:06.785123    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 22:57:06.785237    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:06.785359    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000\disk.vhd' -SizeBytes 20000MB
	I0903 22:57:09.246904    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 22:57:09.247119    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:09.247236    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-270000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000' -SwitchName 'Default Switch' -MemoryStartupBytes 3072MB
	I0903 22:57:12.884912    1228 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-270000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0903 22:57:12.885137    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:12.885137    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-270000 -DynamicMemoryEnabled $false
	I0903 22:57:15.057904    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 22:57:15.058186    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:15.058186    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-270000 -Count 2
	I0903 22:57:17.162536    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 22:57:17.162627    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:17.162627    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-270000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000\boot2docker.iso'
	I0903 22:57:19.686005    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 22:57:19.686005    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:19.686896    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-270000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000\disk.vhd'
	I0903 22:57:22.319331    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 22:57:22.319331    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:22.320086    1228 main.go:141] libmachine: Starting VM...
	I0903 22:57:22.320150    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-270000
	I0903 22:57:25.353871    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 22:57:25.353871    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:25.353871    1228 main.go:141] libmachine: Waiting for host to start...
	I0903 22:57:25.353871    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:57:27.536869    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:57:27.537246    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:27.537323    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:57:29.943283    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 22:57:29.943462    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:30.944809    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:57:33.070346    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:57:33.071425    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:33.071425    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:57:35.532133    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 22:57:35.532133    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:36.533608    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:57:38.672634    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:57:38.672634    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:38.672884    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:57:41.182817    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 22:57:41.182817    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:42.183902    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:57:44.381000    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:57:44.381220    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:44.381220    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:57:46.853303    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 22:57:46.853946    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:47.854276    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:57:49.941137    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:57:49.941137    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:49.941137    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:57:52.463528    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:57:52.463528    1228 main.go:141] libmachine: [stderr =====>] : 
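Between "Waiting for host to start..." and the address above, libmachine alternates two PowerShell queries (VM state, then the first adapter's first IP address) and waits about a second whenever the IP comes back empty, until DHCP on the switch has handed one out. A minimal sketch of that poll-until-nonempty pattern, with a stub `get_vm_ip` standing in for the `(( Hyper-V\Get-VM ... ).networkadapters[0]).ipaddresses[0]` call:

```shell
# Stub for the PowerShell IP query: returns nothing until the 3rd poll,
# mimicking a VM whose DHCP lease has not arrived yet.
get_vm_ip() {
  if [ "$1" -ge 3 ]; then
    echo "172.25.116.52"
  fi
}

# Poll-until-nonempty loop, as in the log (the real loop sleeps ~1s
# between attempts and re-checks the VM state each round).
ip=""
for try in 1 2 3 4 5; do
  ip="$(get_vm_ip "$try")"
  if [ -n "$ip" ]; then
    break
  fi
done
echo "VM IP: $ip"
```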
	I0903 22:57:52.463852    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:57:54.507713    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:57:54.507713    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:54.507713    1228 machine.go:93] provisionDockerMachine start ...
	I0903 22:57:54.508737    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:57:56.547110    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:57:56.547630    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:56.547630    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:57:59.000195    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:57:59.001319    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:59.007347    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 22:57:59.022361    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.116.52 22 <nil> <nil>}
	I0903 22:57:59.022361    1228 main.go:141] libmachine: About to run SSH command:
	hostname
	I0903 22:57:59.161919    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0903 22:57:59.162111    1228 buildroot.go:166] provisioning hostname "ha-270000"
	I0903 22:57:59.162186    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:58:01.167715    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:58:01.167715    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:01.167715    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:58:03.603986    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:58:03.604287    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:03.613095    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 22:58:03.613891    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.116.52 22 <nil> <nil>}
	I0903 22:58:03.613891    1228 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-270000 && echo "ha-270000" | sudo tee /etc/hostname
	I0903 22:58:03.781937    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-270000
	
	I0903 22:58:03.782069    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:58:05.806968    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:58:05.807221    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:05.807221    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:58:08.214073    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:58:08.214073    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:08.220343    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 22:58:08.220903    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.116.52 22 <nil> <nil>}
	I0903 22:58:08.220903    1228 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-270000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-270000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-270000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0903 22:58:08.371173    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
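The SSH command above patches `/etc/hosts` idempotently: do nothing if the hostname is already present, rewrite an existing `127.0.1.1` entry if there is one, otherwise append a new line. The same logic run against a scratch file instead of the real `/etc/hosts` (illustrative only; no sudo involved):

```shell
# Scratch copy of /etc/hosts with a stale 127.0.1.1 entry.
hosts="$(mktemp)"
name="ha-270000"
printf '127.0.0.1 localhost\n127.0.1.1 minikube\n' > "$hosts"

if ! grep -q "[[:space:]]$name\$" "$hosts"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$hosts"; then
    # Rewrite the existing 127.0.1.1 line with the new hostname.
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $name/" "$hosts"
  else
    echo "127.0.1.1 $name" >> "$hosts"
  fi
fi
cat "$hosts"
```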
	I0903 22:58:08.371173    1228 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0903 22:58:08.371880    1228 buildroot.go:174] setting up certificates
	I0903 22:58:08.371880    1228 provision.go:84] configureAuth start
	I0903 22:58:08.371880    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:58:10.384354    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:58:10.385445    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:10.385589    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:58:12.797827    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:58:12.797827    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:12.797827    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:58:14.827178    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:58:14.827488    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:14.827557    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:58:17.256928    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:58:17.257487    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:17.257487    1228 provision.go:143] copyHostCerts
	I0903 22:58:17.257487    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0903 22:58:17.258601    1228 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0903 22:58:17.258601    1228 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0903 22:58:17.259378    1228 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0903 22:58:17.260678    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0903 22:58:17.260925    1228 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0903 22:58:17.260925    1228 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0903 22:58:17.261529    1228 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0903 22:58:17.262247    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0903 22:58:17.262895    1228 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0903 22:58:17.262895    1228 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0903 22:58:17.263660    1228 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0903 22:58:17.264895    1228 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-270000 san=[127.0.0.1 172.25.116.52 ha-270000 localhost minikube]
	I0903 22:58:17.319551    1228 provision.go:177] copyRemoteCerts
	I0903 22:58:17.331089    1228 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0903 22:58:17.331089    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:58:19.404625    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:58:19.404625    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:19.405155    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:58:21.798001    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:58:21.798199    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:21.798706    1228 sshutil.go:53] new ssh client: &{IP:172.25.116.52 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000\id_rsa Username:docker}
	I0903 22:58:21.913655    1228 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5823926s)
	I0903 22:58:21.913655    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0903 22:58:21.913877    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0903 22:58:21.963666    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0903 22:58:21.964199    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0903 22:58:22.018058    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0903 22:58:22.018058    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0903 22:58:22.083090    1228 provision.go:87] duration metric: took 13.7109324s to configureAuth
	I0903 22:58:22.083165    1228 buildroot.go:189] setting minikube options for container-runtime
	I0903 22:58:22.083815    1228 config.go:182] Loaded profile config "ha-270000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0903 22:58:22.083915    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:58:24.183310    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:58:24.183961    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:24.184029    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:58:26.736388    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:58:26.736388    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:26.742143    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 22:58:26.742976    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.116.52 22 <nil> <nil>}
	I0903 22:58:26.743061    1228 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0903 22:58:26.879239    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0903 22:58:26.879322    1228 buildroot.go:70] root file system type: tmpfs
	I0903 22:58:26.879432    1228 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0903 22:58:26.879432    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:58:28.965030    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:58:28.965187    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:28.965187    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:58:31.360709    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:58:31.360709    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:31.367391    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 22:58:31.368210    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.116.52 22 <nil> <nil>}
	I0903 22:58:31.368210    1228 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0903 22:58:31.539576    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0903 22:58:31.539705    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:58:33.562683    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:58:33.562729    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:33.562729    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:58:35.965536    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:58:35.965883    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:35.970665    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 22:58:35.971426    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.116.52 22 <nil> <nil>}
	I0903 22:58:35.971426    1228 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0903 22:58:37.383503    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
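The `diff ... || { mv ...; }` one-liner above is an install-if-changed pattern: `diff` exits non-zero both when the two files differ and when the installed unit does not exist yet (the "can't stat" case seen here), and either outcome triggers the move plus daemon-reload/enable/restart. The same control flow against temp files, with the systemctl calls replaced by an echo (illustrative sketch):

```shell
# Install-if-changed: replace the unit (and reload) only when diff fails,
# i.e. the installed unit is missing or differs from the freshly written one.
dir="$(mktemp -d)"
printf '[Unit]\nDescription=Docker Application Container Engine\n' \
  > "$dir/docker.service.new"

if ! diff -u "$dir/docker.service" "$dir/docker.service.new" 2>/dev/null; then
  mv "$dir/docker.service.new" "$dir/docker.service"
  # Real command: systemctl daemon-reload && enable docker && restart docker
  echo "unit installed"
fi
```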
	
	I0903 22:58:37.383503    1228 machine.go:96] duration metric: took 42.8752051s to provisionDockerMachine
	I0903 22:58:37.383503    1228 client.go:171] duration metric: took 1m51.4562454s to LocalClient.Create
	I0903 22:58:37.383503    1228 start.go:167] duration metric: took 1m51.4562886s to libmachine.API.Create "ha-270000"
	I0903 22:58:37.383503    1228 start.go:293] postStartSetup for "ha-270000" (driver="hyperv")
	I0903 22:58:37.383503    1228 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0903 22:58:37.397578    1228 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0903 22:58:37.397578    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:58:39.484896    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:58:39.484940    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:39.484940    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:58:41.882170    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:58:41.883164    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:41.883643    1228 sshutil.go:53] new ssh client: &{IP:172.25.116.52 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000\id_rsa Username:docker}
	I0903 22:58:41.990226    1228 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.592585s)
	I0903 22:58:42.002247    1228 ssh_runner.go:195] Run: cat /etc/os-release
	I0903 22:58:42.009186    1228 info.go:137] Remote host: Buildroot 2025.02
	I0903 22:58:42.009186    1228 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0903 22:58:42.009366    1228 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0903 22:58:42.010687    1228 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem -> 22202.pem in /etc/ssl/certs
	I0903 22:58:42.010774    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem -> /etc/ssl/certs/22202.pem
	I0903 22:58:42.022820    1228 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0903 22:58:42.045622    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem --> /etc/ssl/certs/22202.pem (1708 bytes)
	I0903 22:58:42.096557    1228 start.go:296] duration metric: took 4.7129898s for postStartSetup
	I0903 22:58:42.100709    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:58:44.072469    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:58:44.073264    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:44.073264    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:58:46.511540    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:58:46.511685    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:46.512018    1228 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\config.json ...
	I0903 22:58:46.515059    1228 start.go:128] duration metric: took 2m0.5937211s to createHost
	I0903 22:58:46.515247    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:58:48.530992    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:58:48.530992    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:48.530992    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:58:50.975901    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:58:50.975901    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:50.982075    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 22:58:50.982075    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.116.52 22 <nil> <nil>}
	I0903 22:58:50.982075    1228 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0903 22:58:51.110851    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: 1756940331.135952045
	
	I0903 22:58:51.110851    1228 fix.go:216] guest clock: 1756940331.135952045
	I0903 22:58:51.110851    1228 fix.go:229] Guest: 2025-09-03 22:58:51.135952045 +0000 UTC Remote: 2025-09-03 22:58:46.5151379 +0000 UTC m=+126.059727001 (delta=4.620814145s)
	I0903 22:58:51.110851    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:58:53.204581    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:58:53.204581    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:53.205518    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:58:55.673742    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:58:55.673742    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:55.682995    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 22:58:55.683822    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.116.52 22 <nil> <nil>}
	I0903 22:58:55.683822    1228 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1756940331
	I0903 22:58:55.843455    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Sep  3 22:58:51 UTC 2025
	
	I0903 22:58:55.843455    1228 fix.go:236] clock set: Wed Sep  3 22:58:51 UTC 2025
	 (err=<nil>)
	I0903 22:58:55.843455    1228 start.go:83] releasing machines lock for "ha-270000", held for 2m9.9221404s
	I0903 22:58:55.843455    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:58:57.907698    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:58:57.907698    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:57.907698    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:59:00.349438    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:59:00.349438    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:59:00.354365    1228 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0903 22:59:00.354523    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:59:00.365673    1228 ssh_runner.go:195] Run: cat /version.json
	I0903 22:59:00.365673    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:59:02.475191    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:59:02.475191    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:59:02.475424    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:59:02.475424    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:59:02.475424    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:59:02.475424    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:59:05.066510    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:59:05.066545    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:59:05.067165    1228 sshutil.go:53] new ssh client: &{IP:172.25.116.52 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000\id_rsa Username:docker}
	I0903 22:59:05.095848    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:59:05.095848    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:59:05.096453    1228 sshutil.go:53] new ssh client: &{IP:172.25.116.52 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000\id_rsa Username:docker}
	I0903 22:59:05.161490    1228 ssh_runner.go:235] Completed: cat /version.json: (4.795752s)
	I0903 22:59:05.175138    1228 ssh_runner.go:195] Run: systemctl --version
	I0903 22:59:05.179441    1228 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.8250107s)
	W0903 22:59:05.179441    1228 start.go:868] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
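This is the failure that produced the test's "unexpected stderr": the Windows-style binary name `curl.exe` was forwarded into the Linux guest, where only plain `curl` exists, so bash exits with status 127 ("command not found") and minikube warns about registry connectivity. A sketch of the symptom and a hypothetical portable probe (not minikube's code):

```shell
# Running the Windows binary name inside a Linux shell reproduces the
# exit status 127 ("command not found") that ssh_runner logged above.
sh -c 'curl.exe --version' >/dev/null 2>&1
status=$?
echo "exit=${status}"

# A portable probe (hypothetical) would prefer whichever name resolves:
found=""
for c in curl.exe curl; do
  if command -v "$c" >/dev/null 2>&1; then found="$c"; break; fi
done
echo "use ${found:-none}"
```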
	I0903 22:59:05.197186    1228 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0903 22:59:05.205723    1228 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0903 22:59:05.218533    1228 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0903 22:59:05.247657    1228 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0903 22:59:05.247687    1228 start.go:495] detecting cgroup driver to use...
	I0903 22:59:05.247687    1228 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0903 22:59:05.301007    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0903 22:59:05.334929    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W0903 22:59:05.344907    1228 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0903 22:59:05.344907    1228 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0903 22:59:05.363500    1228 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0903 22:59:05.375785    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0903 22:59:05.408814    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0903 22:59:05.443170    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0903 22:59:05.474233    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0903 22:59:05.521546    1228 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0903 22:59:05.553883    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0903 22:59:05.588468    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0903 22:59:05.621710    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0903 22:59:05.654708    1228 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0903 22:59:05.671454    1228 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0903 22:59:05.683039    1228 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0903 22:59:05.712788    1228 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
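The netfilter steps above follow a probe-then-fallback pattern: `sysctl` fails with "cannot stat" while `br_netfilter` is unloaded, so minikube falls back to `modprobe br_netfilter`. A small sketch of that decision; `probe()` is a hypothetical helper, not minikube code, and the root-only modprobe is only named, not run.

```shell
# Decide whether the bridge-netfilter sysctl key is readable, or whether
# the br_netfilter module must be loaded first (needs root).
probe() {
  if [ -e "$1" ]; then
    echo "sysctl"      # key exists: read it directly
  else
    echo "modprobe"    # key missing: load br_netfilter first
  fi
}
probe /proc/sys/net/bridge/bridge-nf-call-iptables
```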
	I0903 22:59:05.744475    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 22:59:05.959396    1228 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0903 22:59:06.024145    1228 start.go:495] detecting cgroup driver to use...
	I0903 22:59:06.035150    1228 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0903 22:59:06.076305    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0903 22:59:06.110621    1228 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0903 22:59:06.156621    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0903 22:59:06.192292    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0903 22:59:06.230075    1228 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0903 22:59:06.299289    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0903 22:59:06.323084    1228 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0903 22:59:06.368994    1228 ssh_runner.go:195] Run: which cri-dockerd
	I0903 22:59:06.388692    1228 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0903 22:59:06.408906    1228 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0903 22:59:06.456171    1228 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0903 22:59:06.678583    1228 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0903 22:59:06.879109    1228 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0903 22:59:06.879109    1228 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0903 22:59:06.925446    1228 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0903 22:59:06.959185    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 22:59:07.199630    1228 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0903 22:59:07.386342    1228 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0903 22:59:07.423569    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0903 22:59:07.460030    1228 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0903 22:59:07.505005    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 22:59:07.752256    1228 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0903 22:59:08.742023    1228 retry.go:31] will retry after 1.216709918s: docker not running
	I0903 22:59:09.972425    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0903 22:59:10.015636    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0903 22:59:10.054647    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0903 22:59:10.095481    1228 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0903 22:59:10.329998    1228 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0903 22:59:10.572624    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 22:59:10.798123    1228 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0903 22:59:10.856052    1228 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0903 22:59:10.890575    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 22:59:11.116450    1228 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0903 22:59:11.266811    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0903 22:59:11.293390    1228 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0903 22:59:11.307088    1228 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0903 22:59:11.316839    1228 start.go:563] Will wait 60s for crictl version
	I0903 22:59:11.328011    1228 ssh_runner.go:195] Run: which crictl
	I0903 22:59:11.345410    1228 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0903 22:59:11.395997    1228 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.3.2
	RuntimeApiVersion:  v1
	I0903 22:59:11.406747    1228 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0903 22:59:11.450319    1228 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0903 22:59:11.494154    1228 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.3.2 ...
	I0903 22:59:11.494154    1228 ip.go:180] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0903 22:59:11.498371    1228 ip.go:194] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0903 22:59:11.499132    1228 ip.go:194] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0903 22:59:11.499132    1228 ip.go:189] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0903 22:59:11.499132    1228 ip.go:215] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:71:2e:33 Flags:up|broadcast|multicast|running}
	I0903 22:59:11.502130    1228 ip.go:218] interface addr: fe80::b536:5e95:cebf:bd87/64
	I0903 22:59:11.502130    1228 ip.go:218] interface addr: 172.25.112.1/20
	I0903 22:59:11.514219    1228 ssh_runner.go:195] Run: grep 172.25.112.1	host.minikube.internal$ /etc/hosts
	I0903 22:59:11.519778    1228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.112.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0903 22:59:11.553173    1228 kubeadm.go:875] updating cluster {Name:ha-270000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-270000 Namespace:default APIServerHAVIP:172.25.127.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.116.52 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0903 22:59:11.553489    1228 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0903 22:59:11.563125    1228 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0903 22:59:11.586673    1228 docker.go:691] Got preloaded images: 
	I0903 22:59:11.586673    1228 docker.go:697] registry.k8s.io/kube-apiserver:v1.34.0 wasn't preloaded
	I0903 22:59:11.598082    1228 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0903 22:59:11.629359    1228 ssh_runner.go:195] Run: which lz4
	I0903 22:59:11.635686    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0903 22:59:11.648294    1228 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0903 22:59:11.654997    1228 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0903 22:59:11.655342    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (353447550 bytes)
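The existence check above is how ssh_runner decides whether to transfer the preload tarball: `stat` exits non-zero for a missing file, which is read as "scp it over". A sketch of the same logic; the path below is a stand-in, not the real `/preloaded.tar.lz4`.

```shell
# stat-based existence probe, mirroring ssh_runner's check above.
# The path is a stand-in chosen so it does not exist.
f="/tmp/preload-probe-$$.tar.lz4"
if stat -c "%s %y" "$f" >/dev/null 2>&1; then
  echo "present: skip transfer"
else
  echo "absent: scp the tarball"
fi
```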
	I0903 22:59:13.296067    1228 docker.go:655] duration metric: took 1.6600156s to copy over tarball
	I0903 22:59:13.307060    1228 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0903 22:59:20.865355    1228 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (7.5581917s)
	I0903 22:59:20.865462    1228 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0903 22:59:20.933546    1228 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0903 22:59:20.957804    1228 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2632 bytes)
	I0903 22:59:21.002763    1228 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0903 22:59:21.038331    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 22:59:21.284391    1228 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0903 22:59:23.488193    1228 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.2036927s)
	I0903 22:59:23.500025    1228 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0903 22:59:23.529727    1228 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0903 22:59:23.529794    1228 cache_images.go:85] Images are preloaded, skipping loading
	I0903 22:59:23.529794    1228 kubeadm.go:926] updating node { 172.25.116.52 8443 v1.34.0 docker true true} ...
	I0903 22:59:23.529794    1228 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-270000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.116.52
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-270000 Namespace:default APIServerHAVIP:172.25.127.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0903 22:59:23.540680    1228 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0903 22:59:23.608509    1228 cni.go:84] Creating CNI manager for ""
	I0903 22:59:23.608543    1228 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0903 22:59:23.608628    1228 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0903 22:59:23.608709    1228 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.116.52 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-270000 NodeName:ha-270000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.116.52"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.116.52 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0903 22:59:23.608738    1228 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.116.52
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-270000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.25.116.52"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.116.52"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0903 22:59:23.608738    1228 kube-vip.go:115] generating kube-vip config ...
	I0903 22:59:23.621691    1228 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0903 22:59:23.653809    1228 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0903 22:59:23.654126    1228 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.25.127.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
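The leader-election values in the kube-vip manifest above (lease 5s, renew 3s, retry 1s) follow the usual client-go ordering constraints. The check below is an assumed sanity sketch, not something kube-vip itself logs.

```shell
# Leader-election timings from the kube-vip manifest above.
# Usual constraints: retry period < renew deadline < lease duration.
lease=5 renew=3 retry=1
if [ "$renew" -lt "$lease" ] && [ "$retry" -lt "$renew" ]; then
  echo "timings consistent"
fi
```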
	I0903 22:59:23.666808    1228 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0903 22:59:23.685598    1228 binaries.go:44] Found k8s binaries, skipping transfer
	I0903 22:59:23.698227    1228 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0903 22:59:23.718760    1228 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0903 22:59:23.755199    1228 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0903 22:59:23.789033    1228 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I0903 22:59:23.822970    1228 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0903 22:59:23.874633    1228 ssh_runner.go:195] Run: grep 172.25.127.254	control-plane.minikube.internal$ /etc/hosts
	I0903 22:59:23.882338    1228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.127.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0903 22:59:23.915252    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 22:59:24.145793    1228 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0903 22:59:24.197926    1228 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000 for IP: 172.25.116.52
	I0903 22:59:24.197956    1228 certs.go:194] generating shared ca certs ...
	I0903 22:59:24.198007    1228 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 22:59:24.198893    1228 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0903 22:59:24.199396    1228 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0903 22:59:24.199456    1228 certs.go:256] generating profile certs ...
	I0903 22:59:24.200409    1228 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\client.key
	I0903 22:59:24.200591    1228 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\client.crt with IP's: []
	I0903 22:59:24.887505    1228 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\client.crt ...
	I0903 22:59:24.887505    1228 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\client.crt: {Name:mkb7aaa1eac443ddcdcabb4cef5bb739e9d38af9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 22:59:24.888985    1228 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\client.key ...
	I0903 22:59:24.888985    1228 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\client.key: {Name:mkc5b79577653c8f04349871260874ebd30aa001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 22:59:24.889458    1228 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key.9f1c9bfe
	I0903 22:59:24.889458    1228 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt.9f1c9bfe with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.116.52 172.25.127.254]
	I0903 22:59:24.972533    1228 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt.9f1c9bfe ...
	I0903 22:59:24.972533    1228 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt.9f1c9bfe: {Name:mkc5ecbd182ead24488b0bd7ce60227ca749e5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 22:59:24.974572    1228 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key.9f1c9bfe ...
	I0903 22:59:24.974572    1228 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key.9f1c9bfe: {Name:mk823ce6e6d376d463e4c5c3be67b708c72c9bbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 22:59:24.977545    1228 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt.9f1c9bfe -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt
	I0903 22:59:24.988484    1228 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key.9f1c9bfe -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key
	I0903 22:59:24.990570    1228 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.key
	I0903 22:59:24.990570    1228 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.crt with IP's: []
	I0903 22:59:25.149477    1228 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.crt ...
	I0903 22:59:25.149477    1228 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.crt: {Name:mk37d7e8a33d45a73e07c2e5522d69b31733f450 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 22:59:25.151837    1228 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.key ...
	I0903 22:59:25.151837    1228 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.key: {Name:mk79e180a6069c7b0284816924a3968ca51e1f5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 22:59:25.152837    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0903 22:59:25.153300    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0903 22:59:25.153574    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0903 22:59:25.153844    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0903 22:59:25.154060    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0903 22:59:25.154230    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0903 22:59:25.154230    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0903 22:59:25.166019    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0903 22:59:25.166999    1228 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220.pem (1338 bytes)
	W0903 22:59:25.167916    1228 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220_empty.pem, impossibly tiny 0 bytes
	I0903 22:59:25.167916    1228 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0903 22:59:25.168261    1228 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0903 22:59:25.168626    1228 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0903 22:59:25.169204    1228 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0903 22:59:25.169514    1228 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem (1708 bytes)
	I0903 22:59:25.169514    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220.pem -> /usr/share/ca-certificates/2220.pem
	I0903 22:59:25.170418    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem -> /usr/share/ca-certificates/22202.pem
	I0903 22:59:25.170418    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0903 22:59:25.171306    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0903 22:59:25.225766    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0903 22:59:25.280504    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0903 22:59:25.336761    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0903 22:59:25.415957    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0903 22:59:25.487966    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0903 22:59:25.562028    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0903 22:59:25.627079    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0903 22:59:25.686633    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220.pem --> /usr/share/ca-certificates/2220.pem (1338 bytes)
	I0903 22:59:25.741231    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem --> /usr/share/ca-certificates/22202.pem (1708 bytes)
	I0903 22:59:25.787473    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0903 22:59:25.839569    1228 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0903 22:59:25.887378    1228 ssh_runner.go:195] Run: openssl version
	I0903 22:59:25.908881    1228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2220.pem && ln -fs /usr/share/ca-certificates/2220.pem /etc/ssl/certs/2220.pem"
	I0903 22:59:25.942912    1228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2220.pem
	I0903 22:59:25.949548    1228 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  3 22:37 /usr/share/ca-certificates/2220.pem
	I0903 22:59:25.961629    1228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2220.pem
	I0903 22:59:25.987716    1228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2220.pem /etc/ssl/certs/51391683.0"
	I0903 22:59:26.022131    1228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22202.pem && ln -fs /usr/share/ca-certificates/22202.pem /etc/ssl/certs/22202.pem"
	I0903 22:59:26.067732    1228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22202.pem
	I0903 22:59:26.076973    1228 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  3 22:37 /usr/share/ca-certificates/22202.pem
	I0903 22:59:26.091872    1228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22202.pem
	I0903 22:59:26.119482    1228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22202.pem /etc/ssl/certs/3ec20f2e.0"
	I0903 22:59:26.163297    1228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0903 22:59:26.223621    1228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0903 22:59:26.232849    1228 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  3 22:20 /usr/share/ca-certificates/minikubeCA.pem
	I0903 22:59:26.248494    1228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0903 22:59:26.276462    1228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0903 22:59:26.316381    1228 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0903 22:59:26.324206    1228 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0903 22:59:26.324684    1228 kubeadm.go:392] StartCluster: {Name:ha-270000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-270000 Namespace:default APIServerHAVIP:172.25.127.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.116.52 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 22:59:26.336642    1228 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0903 22:59:26.375638    1228 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0903 22:59:26.421667    1228 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0903 22:59:26.462196    1228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0903 22:59:26.486171    1228 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0903 22:59:26.486171    1228 kubeadm.go:157] found existing configuration files:
	
	I0903 22:59:26.498171    1228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0903 22:59:26.516161    1228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0903 22:59:26.527161    1228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0903 22:59:26.563353    1228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0903 22:59:26.585736    1228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0903 22:59:26.596787    1228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0903 22:59:26.626847    1228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0903 22:59:26.650644    1228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0903 22:59:26.664473    1228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0903 22:59:26.699851    1228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0903 22:59:26.719594    1228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0903 22:59:26.731666    1228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0903 22:59:26.751379    1228 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0903 22:59:26.978629    1228 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0903 22:59:44.810760    1228 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0903 22:59:44.810760    1228 kubeadm.go:310] [preflight] Running pre-flight checks
	I0903 22:59:44.810760    1228 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0903 22:59:44.811527    1228 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0903 22:59:44.811679    1228 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0903 22:59:44.811679    1228 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0903 22:59:44.815084    1228 out.go:252]   - Generating certificates and keys ...
	I0903 22:59:44.815205    1228 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0903 22:59:44.815205    1228 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0903 22:59:44.815205    1228 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0903 22:59:44.815882    1228 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0903 22:59:44.815882    1228 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0903 22:59:44.815882    1228 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0903 22:59:44.815882    1228 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0903 22:59:44.816689    1228 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-270000 localhost] and IPs [172.25.116.52 127.0.0.1 ::1]
	I0903 22:59:44.816754    1228 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0903 22:59:44.816754    1228 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-270000 localhost] and IPs [172.25.116.52 127.0.0.1 ::1]
	I0903 22:59:44.816754    1228 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0903 22:59:44.817441    1228 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0903 22:59:44.817530    1228 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0903 22:59:44.817719    1228 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0903 22:59:44.817719    1228 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0903 22:59:44.817719    1228 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0903 22:59:44.817719    1228 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0903 22:59:44.818413    1228 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0903 22:59:44.818600    1228 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0903 22:59:44.818698    1228 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0903 22:59:44.818698    1228 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0903 22:59:44.822293    1228 out.go:252]   - Booting up control plane ...
	I0903 22:59:44.822293    1228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0903 22:59:44.822293    1228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0903 22:59:44.823100    1228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0903 22:59:44.823123    1228 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0903 22:59:44.823123    1228 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0903 22:59:44.823706    1228 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0903 22:59:44.823871    1228 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0903 22:59:44.823871    1228 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0903 22:59:44.823871    1228 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0903 22:59:44.823871    1228 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0903 22:59:44.823871    1228 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.57863ms
	I0903 22:59:44.823871    1228 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0903 22:59:44.823871    1228 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://172.25.116.52:8443/livez
	I0903 22:59:44.825077    1228 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0903 22:59:44.825312    1228 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0903 22:59:44.825470    1228 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 5.564494726s
	I0903 22:59:44.825512    1228 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 6.900831099s
	I0903 22:59:44.825859    1228 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 10.003335069s
	I0903 22:59:44.825911    1228 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0903 22:59:44.825911    1228 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0903 22:59:44.826613    1228 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0903 22:59:44.826763    1228 kubeadm.go:310] [mark-control-plane] Marking the node ha-270000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0903 22:59:44.827359    1228 kubeadm.go:310] [bootstrap-token] Using token: 128eq1.2kh3zrs5ds3cj6iy
	I0903 22:59:44.830041    1228 out.go:252]   - Configuring RBAC rules ...
	I0903 22:59:44.830354    1228 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0903 22:59:44.830509    1228 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0903 22:59:44.830972    1228 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0903 22:59:44.831410    1228 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0903 22:59:44.831817    1228 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0903 22:59:44.832156    1228 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0903 22:59:44.832350    1228 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0903 22:59:44.832423    1228 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0903 22:59:44.832600    1228 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0903 22:59:44.832600    1228 kubeadm.go:310] 
	I0903 22:59:44.832600    1228 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0903 22:59:44.832801    1228 kubeadm.go:310] 
	I0903 22:59:44.833058    1228 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0903 22:59:44.833099    1228 kubeadm.go:310] 
	I0903 22:59:44.833099    1228 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0903 22:59:44.833099    1228 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0903 22:59:44.833099    1228 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0903 22:59:44.833099    1228 kubeadm.go:310] 
	I0903 22:59:44.833099    1228 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0903 22:59:44.833099    1228 kubeadm.go:310] 
	I0903 22:59:44.833099    1228 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0903 22:59:44.833099    1228 kubeadm.go:310] 
	I0903 22:59:44.833718    1228 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0903 22:59:44.833815    1228 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0903 22:59:44.833815    1228 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0903 22:59:44.833815    1228 kubeadm.go:310] 
	I0903 22:59:44.833815    1228 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0903 22:59:44.834388    1228 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0903 22:59:44.834426    1228 kubeadm.go:310] 
	I0903 22:59:44.834426    1228 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 128eq1.2kh3zrs5ds3cj6iy \
	I0903 22:59:44.834426    1228 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:461028e7d31446a9db54ef88db35928fa51812dbcfd2f42c8a70c32665923137 \
	I0903 22:59:44.834426    1228 kubeadm.go:310] 	--control-plane 
	I0903 22:59:44.834426    1228 kubeadm.go:310] 
	I0903 22:59:44.835022    1228 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0903 22:59:44.835134    1228 kubeadm.go:310] 
	I0903 22:59:44.835206    1228 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 128eq1.2kh3zrs5ds3cj6iy \
	I0903 22:59:44.835206    1228 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:461028e7d31446a9db54ef88db35928fa51812dbcfd2f42c8a70c32665923137 
	I0903 22:59:44.835206    1228 cni.go:84] Creating CNI manager for ""
	I0903 22:59:44.835206    1228 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0903 22:59:44.838358    1228 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0903 22:59:44.854466    1228 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0903 22:59:44.864658    1228 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0903 22:59:44.864658    1228 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0903 22:59:44.918774    1228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0903 22:59:45.313364    1228 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0903 22:59:45.329241    1228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 22:59:45.332401    1228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-270000 minikube.k8s.io/updated_at=2025_09_03T22_59_45_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=b3583632deefb20d71cab8d8ac0a8c3504aed1fb minikube.k8s.io/name=ha-270000 minikube.k8s.io/primary=true
	I0903 22:59:45.393884    1228 ops.go:34] apiserver oom_adj: -16
	I0903 22:59:45.547506    1228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 22:59:46.046702    1228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 22:59:46.546264    1228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 22:59:47.045032    1228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 22:59:47.547870    1228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 22:59:48.047805    1228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 22:59:48.548063    1228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 22:59:49.047476    1228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 22:59:49.547038    1228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 22:59:49.705088    1228 kubeadm.go:1105] duration metric: took 4.3915219s to wait for elevateKubeSystemPrivileges
	I0903 22:59:49.705088    1228 kubeadm.go:394] duration metric: took 23.3801619s to StartCluster
	I0903 22:59:49.705088    1228 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 22:59:49.705088    1228 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0903 22:59:49.707409    1228 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 22:59:49.708771    1228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0903 22:59:49.708771    1228 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.25.116.52 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0903 22:59:49.709312    1228 start.go:241] waiting for startup goroutines ...
	I0903 22:59:49.708771    1228 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0903 22:59:49.709487    1228 addons.go:69] Setting storage-provisioner=true in profile "ha-270000"
	I0903 22:59:49.709487    1228 addons.go:69] Setting default-storageclass=true in profile "ha-270000"
	I0903 22:59:49.709487    1228 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-270000"
	I0903 22:59:49.709487    1228 config.go:182] Loaded profile config "ha-270000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0903 22:59:49.709487    1228 addons.go:238] Setting addon storage-provisioner=true in "ha-270000"
	I0903 22:59:49.709487    1228 host.go:66] Checking if "ha-270000" exists ...
	I0903 22:59:49.710228    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:59:49.711086    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:59:49.888307    1228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.25.112.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0903 22:59:50.342829    1228 start.go:976] {"host.minikube.internal": 172.25.112.1} host record injected into CoreDNS's ConfigMap
	I0903 22:59:51.992108    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:59:51.992165    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:59:51.993027    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:59:51.993027    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:59:51.995201    1228 kapi.go:59] client config for ha-270000: &rest.Config{Host:"https://172.25.127.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-270000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-270000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24e0580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0903 22:59:51.995201    1228 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0903 22:59:51.997683    1228 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0903 22:59:51.997855    1228 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0903 22:59:51.997921    1228 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0903 22:59:51.997950    1228 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0903 22:59:51.997950    1228 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0903 22:59:51.997968    1228 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0903 22:59:51.998492    1228 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0903 22:59:51.998523    1228 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0903 22:59:51.998523    1228 addons.go:238] Setting addon default-storageclass=true in "ha-270000"
	I0903 22:59:51.998523    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:59:51.998523    1228 host.go:66] Checking if "ha-270000" exists ...
	I0903 22:59:51.999775    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:59:54.396260    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:59:54.396260    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:59:54.396260    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:59:54.396260    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:59:54.396260    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:59:54.396260    1228 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0903 22:59:54.396260    1228 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0903 22:59:54.396260    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:59:56.602798    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:59:56.603007    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:59:56.603086    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:59:57.131273    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:59:57.132296    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:59:57.134594    1228 sshutil.go:53] new ssh client: &{IP:172.25.116.52 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000\id_rsa Username:docker}
	I0903 22:59:57.292276    1228 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0903 22:59:59.195658    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:59:59.196377    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:59:59.196748    1228 sshutil.go:53] new ssh client: &{IP:172.25.116.52 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000\id_rsa Username:docker}
	I0903 22:59:59.345758    1228 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0903 22:59:59.521179    1228 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0903 22:59:59.523303    1228 addons.go:514] duration metric: took 9.8143984s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0903 22:59:59.523303    1228 start.go:246] waiting for cluster config update ...
	I0903 22:59:59.523303    1228 start.go:255] writing updated cluster config ...
	I0903 22:59:59.529168    1228 out.go:203] 
	I0903 22:59:59.543756    1228 config.go:182] Loaded profile config "ha-270000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0903 22:59:59.543756    1228 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\config.json ...
	I0903 22:59:59.551829    1228 out.go:179] * Starting "ha-270000-m02" control-plane node in "ha-270000" cluster
	I0903 22:59:59.555929    1228 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0903 22:59:59.555929    1228 cache.go:58] Caching tarball of preloaded images
	I0903 22:59:59.556753    1228 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0903 22:59:59.556753    1228 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0903 22:59:59.556753    1228 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\config.json ...
	I0903 22:59:59.564917    1228 start.go:360] acquireMachinesLock for ha-270000-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0903 22:59:59.564995    1228 start.go:364] duration metric: took 77.6µs to acquireMachinesLock for "ha-270000-m02"
	I0903 22:59:59.564995    1228 start.go:93] Provisioning new machine with config: &{Name:ha-270000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.34.0 ClusterName:ha-270000 Namespace:default APIServerHAVIP:172.25.127.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.116.52 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisk
s:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0903 22:59:59.564995    1228 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0903 22:59:59.567675    1228 out.go:252] * Creating hyperv VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0903 22:59:59.567675    1228 start.go:159] libmachine.API.Create for "ha-270000" (driver="hyperv")
	I0903 22:59:59.567675    1228 client.go:168] LocalClient.Create starting
	I0903 22:59:59.568588    1228 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0903 22:59:59.568588    1228 main.go:141] libmachine: Decoding PEM data...
	I0903 22:59:59.568588    1228 main.go:141] libmachine: Parsing certificate...
	I0903 22:59:59.568588    1228 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0903 22:59:59.569581    1228 main.go:141] libmachine: Decoding PEM data...
	I0903 22:59:59.569581    1228 main.go:141] libmachine: Parsing certificate...
	I0903 22:59:59.569581    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0903 23:00:01.434365    1228 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0903 23:00:01.434365    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:01.434365    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0903 23:00:03.206336    1228 main.go:141] libmachine: [stdout =====>] : False
	
	I0903 23:00:03.206456    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:03.206511    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0903 23:00:04.694654    1228 main.go:141] libmachine: [stdout =====>] : True
	
	I0903 23:00:04.694882    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:04.694882    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0903 23:00:08.310297    1228 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0903 23:00:08.310547    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:08.312413    1228 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.36.0-1753487480-21147-amd64.iso...
	I0903 23:00:08.977686    1228 main.go:141] libmachine: Creating SSH key...
	I0903 23:00:09.279116    1228 main.go:141] libmachine: Creating VM...
	I0903 23:00:09.279116    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0903 23:00:12.161709    1228 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0903 23:00:12.162233    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:12.162233    1228 main.go:141] libmachine: Using switch "Default Switch"
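The switch choice above follows from the `Get-VMSwitch` query two entries earlier: prefer an External switch, otherwise fall back to the built-in "Default Switch" (whose well-known GUID is pinned in the `Where-Object` filter). The driver does this in Go; the grep/sed parsing below is only an editorial sketch over the same JSON shape:

```shell
# The JSON shape emitted by the ConvertTo-Json call in the log.
cat > /tmp/switches.json <<'EOF'
[
    {
        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
        "Name":  "Default Switch",
        "SwitchType":  1
    }
]
EOF

# Hyper-V switch types: 0=Private, 1=Internal, 2=External.
# Prefer an External switch; otherwise fall back to the Default Switch.
if grep -q '"SwitchType": *2' /tmp/switches.json; then
  switch=$(grep -B 2 '"SwitchType": *2' /tmp/switches.json |
           sed -n 's/.*"Name": *"\(.*\)".*/\1/p' | head -n 1)
else
  switch="Default Switch"
fi
echo "Using switch \"$switch\""
```

Here `SwitchType` is 1 (Internal), so the fallback path is taken — matching the "Using switch \"Default Switch\"" line in the log.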
	I0903 23:00:12.162233    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0903 23:00:13.955171    1228 main.go:141] libmachine: [stdout =====>] : True
	
	I0903 23:00:13.955470    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:13.955470    1228 main.go:141] libmachine: Creating VHD
	I0903 23:00:13.955470    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0903 23:00:17.582383    1228 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : C6F9CF35-BAE2-447C-9334-441540916198
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0903 23:00:17.582633    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:17.582633    1228 main.go:141] libmachine: Writing magic tar header
	I0903 23:00:17.582713    1228 main.go:141] libmachine: Writing SSH key tar header
	I0903 23:00:17.597666    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0903 23:00:20.786085    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:00:20.786932    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:20.787028    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m02\disk.vhd' -SizeBytes 20000MB
	I0903 23:00:23.441578    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:00:23.441578    1228 main.go:141] libmachine: [stderr =====>] : 
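The VHD sequence above (New-VHD 10 MB fixed, "Writing magic tar header" / "Writing SSH key tar header", Convert-VHD to dynamic, Resize-VHD to 20000 MB) reflects a docker-machine-era trick: a fixed VHD is raw enough that a tar stream written at offset 0 survives, so the guest's init can later untar the SSH key material straight off the block device. A filesystem-only sketch of that idea, with placeholder paths and key content (the real code then converts the VHD to dynamic and resizes it, which this sketch does not attempt):

```shell
mkdir -p /tmp/vmdemo/.ssh
echo "ssh-rsa AAAA... demo" > /tmp/vmdemo/.ssh/id_rsa.pub   # placeholder key

# Fixed-size "disk": 10 MB of zeros, like New-VHD -SizeBytes 10MB -Fixed.
dd if=/dev/zero of=/tmp/vmdemo/disk.raw bs=1M count=10 2>/dev/null

# Overlay a tar stream at the front of the raw disk; trailing zeros read
# as tar end-of-archive, so the guest can run the equivalent of
# `tar tf /dev/sda` to recover the key files.
tar -C /tmp/vmdemo -cf - .ssh | dd of=/tmp/vmdemo/disk.raw conv=notrunc 2>/dev/null
tar -tf /tmp/vmdemo/disk.raw
```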
	I0903 23:00:23.441578    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-270000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 3072MB
	I0903 23:00:27.038138    1228 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-270000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0903 23:00:27.038138    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:27.038900    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-270000-m02 -DynamicMemoryEnabled $false
	I0903 23:00:29.214762    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:00:29.215160    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:29.215160    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-270000-m02 -Count 2
	I0903 23:00:31.343129    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:00:31.343129    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:31.343895    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-270000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m02\boot2docker.iso'
	I0903 23:00:33.882312    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:00:33.883472    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:33.883536    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-270000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m02\disk.vhd'
	I0903 23:00:36.511924    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:00:36.511924    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:36.511924    1228 main.go:141] libmachine: Starting VM...
	I0903 23:00:36.511924    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-270000-m02
	I0903 23:00:39.604406    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:00:39.604406    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:39.604406    1228 main.go:141] libmachine: Waiting for host to start...
	I0903 23:00:39.605534    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:00:41.809752    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:00:41.809791    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:41.809875    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:00:44.263721    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:00:44.263721    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:45.264365    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:00:47.388052    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:00:47.388052    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:47.388853    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:00:49.908592    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:00:49.908592    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:50.910491    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:00:53.055638    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:00:53.055778    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:53.055866    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:00:55.506199    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:00:55.506199    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:56.506649    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:00:58.645358    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:00:58.645358    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:58.645568    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:01:01.111641    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:01:01.111641    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:02.112820    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:01:04.279882    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:01:04.280127    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:04.280127    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:01:06.946702    1228 main.go:141] libmachine: [stdout =====>] : 172.25.120.53
	
	I0903 23:01:06.947149    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:06.947149    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:01:09.064463    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:01:09.064463    1228 main.go:141] libmachine: [stderr =====>] : 
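The "Waiting for host to start..." stretch above is a fixed poll: query the VM state, query the first adapter's first IP address, sleep about a second, and repeat until an address appears (five empty polls here before `172.25.120.53` shows up). A shell sketch of that loop, with a hypothetical `get_vm_ip` stub standing in for the PowerShell `(( Hyper-V\Get-VM ... ).networkadapters[0]).ipaddresses[0]` query:

```shell
# Poll until the VM's first adapter reports an IPv4 address.
# get_vm_ip is a stub that pretends the address appears on the 4th poll.
attempt=0
get_vm_ip() {
  if [ "$attempt" -ge 3 ]; then
    echo "172.25.120.53"
  fi
}

ip=""
while [ -z "$ip" ]; do
  ip=$(get_vm_ip)
  if [ -z "$ip" ]; then
    attempt=$((attempt + 1))
    # the real driver sleeps ~1s here between state/IP queries
  fi
done
echo "VM reachable at $ip after $attempt empty polls"
```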
	I0903 23:01:09.064463    1228 machine.go:93] provisionDockerMachine start ...
	I0903 23:01:09.065092    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:01:11.249147    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:01:11.249388    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:11.249449    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:01:13.780965    1228 main.go:141] libmachine: [stdout =====>] : 172.25.120.53
	
	I0903 23:01:13.780965    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:13.787233    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 23:01:13.802486    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.120.53 22 <nil> <nil>}
	I0903 23:01:13.802486    1228 main.go:141] libmachine: About to run SSH command:
	hostname
	I0903 23:01:13.944458    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0903 23:01:13.944458    1228 buildroot.go:166] provisioning hostname "ha-270000-m02"
	I0903 23:01:13.944555    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:01:16.014848    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:01:16.015598    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:16.015728    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:01:18.480070    1228 main.go:141] libmachine: [stdout =====>] : 172.25.120.53
	
	I0903 23:01:18.480070    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:18.486139    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 23:01:18.487034    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.120.53 22 <nil> <nil>}
	I0903 23:01:18.487034    1228 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-270000-m02 && echo "ha-270000-m02" | sudo tee /etc/hostname
	I0903 23:01:18.653822    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-270000-m02
	
	I0903 23:01:18.653962    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:01:20.738564    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:01:20.739272    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:20.739272    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:01:23.309791    1228 main.go:141] libmachine: [stdout =====>] : 172.25.120.53
	
	I0903 23:01:23.309898    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:23.316369    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 23:01:23.317085    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.120.53 22 <nil> <nil>}
	I0903 23:01:23.317085    1228 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-270000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-270000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-270000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0903 23:01:23.482022    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
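The heredoc the provisioner just ran over SSH is an idempotent hostname fix-up: do nothing if `/etc/hosts` already maps the hostname, otherwise rewrite an existing `127.0.1.1` entry in place, or append one if none exists. The same logic against a scratch copy (the real command targets `/etc/hosts` with `sudo sed`/`sudo tee`):

```shell
HOSTS=/tmp/hosts.demo
NAME=ha-270000-m02
printf '127.0.0.1 localhost\n127.0.1.1 minikube\n' > "$HOSTS"

# Only touch the file if the hostname isn't present yet (idempotent).
if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    # Rewrite the existing 127.0.1.1 line in place.
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
cat "$HOSTS"
```

Running it a second time is a no-op, which is why the log can safely re-run the same script on every provision.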
	I0903 23:01:23.482022    1228 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0903 23:01:23.482022    1228 buildroot.go:174] setting up certificates
	I0903 23:01:23.482022    1228 provision.go:84] configureAuth start
	I0903 23:01:23.482022    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:01:25.567525    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:01:25.568320    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:25.568320    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:01:28.109138    1228 main.go:141] libmachine: [stdout =====>] : 172.25.120.53
	
	I0903 23:01:28.109138    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:28.109215    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:01:30.178520    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:01:30.178520    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:30.179071    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:01:32.682960    1228 main.go:141] libmachine: [stdout =====>] : 172.25.120.53
	
	I0903 23:01:32.682960    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:32.683078    1228 provision.go:143] copyHostCerts
	I0903 23:01:32.683205    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0903 23:01:32.683540    1228 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0903 23:01:32.683540    1228 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0903 23:01:32.684090    1228 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0903 23:01:32.685403    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0903 23:01:32.685890    1228 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0903 23:01:32.685890    1228 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0903 23:01:32.686412    1228 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0903 23:01:32.687791    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0903 23:01:32.688071    1228 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0903 23:01:32.688071    1228 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0903 23:01:32.688554    1228 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0903 23:01:32.689918    1228 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-270000-m02 san=[127.0.0.1 172.25.120.53 ha-270000-m02 localhost minikube]
	I0903 23:01:33.223764    1228 provision.go:177] copyRemoteCerts
	I0903 23:01:33.236539    1228 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0903 23:01:33.236702    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:01:35.330576    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:01:35.330765    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:35.330886    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:01:37.884642    1228 main.go:141] libmachine: [stdout =====>] : 172.25.120.53
	
	I0903 23:01:37.885754    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:37.886165    1228 sshutil.go:53] new ssh client: &{IP:172.25.120.53 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m02\id_rsa Username:docker}
	I0903 23:01:38.016251    1228 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7796477s)
	I0903 23:01:38.016251    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0903 23:01:38.016845    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0903 23:01:38.083946    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0903 23:01:38.084108    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0903 23:01:38.150514    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0903 23:01:38.151048    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0903 23:01:38.216537    1228 provision.go:87] duration metric: took 14.7343143s to configureAuth
	I0903 23:01:38.216537    1228 buildroot.go:189] setting minikube options for container-runtime
	I0903 23:01:38.216537    1228 config.go:182] Loaded profile config "ha-270000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0903 23:01:38.217449    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:01:40.284382    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:01:40.284382    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:40.284611    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:01:42.834889    1228 main.go:141] libmachine: [stdout =====>] : 172.25.120.53
	
	I0903 23:01:42.834889    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:42.841328    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 23:01:42.841846    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.120.53 22 <nil> <nil>}
	I0903 23:01:42.841846    1228 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0903 23:01:42.975282    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0903 23:01:42.975282    1228 buildroot.go:70] root file system type: tmpfs
	I0903 23:01:42.975469    1228 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0903 23:01:42.975693    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:01:45.033543    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:01:45.033543    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:45.033543    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:01:47.532694    1228 main.go:141] libmachine: [stdout =====>] : 172.25.120.53
	
	I0903 23:01:47.532833    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:47.539314    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 23:01:47.539996    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.120.53 22 <nil> <nil>}
	I0903 23:01:47.539996    1228 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=172.25.116.52"
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0903 23:01:47.722259    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=172.25.116.52
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0903 23:01:47.723589    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:01:49.829978    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:01:49.829978    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:49.831091    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:01:52.405256    1228 main.go:141] libmachine: [stdout =====>] : 172.25.120.53
	
	I0903 23:01:52.406355    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:52.413533    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 23:01:52.414293    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.120.53 22 <nil> <nil>}
	I0903 23:01:52.414293    1228 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0903 23:01:53.833755    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
	
	I0903 23:01:53.833755    1228 machine.go:96] duration metric: took 44.7686825s to provisionDockerMachine
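The `diff ... || { mv ...; systemctl ... }` command above is an idempotent unit-update pattern: the freshly rendered `docker.service.new` only replaces the installed unit (and triggers a daemon-reload plus restart) when the two files differ; here `diff` fails because no unit exists yet, so the new file is installed. A minimal sketch of that pattern, using temp files in place of `/lib/systemd/system/docker.service` and with the `systemctl` steps elided:

```shell
# Sketch of the conditional unit install seen in the log. $unit stands in for
# /lib/systemd/system/docker.service; the real flow follows the mv with
# `systemctl daemon-reload && systemctl enable docker && systemctl restart docker`.
set -eu
unit=$(mktemp)                       # empty, like a missing installed unit
new="${unit}.new"
printf '%s\n' '[Unit]' 'Description=demo' > "$new"
if ! diff -u "$unit" "$new" >/dev/null 2>&1; then
    mv "$new" "$unit"                # only rewrite (and restart) on change
fi
cat "$unit"
```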
	I0903 23:01:53.833755    1228 client.go:171] duration metric: took 1m54.2645318s to LocalClient.Create
	I0903 23:01:53.833755    1228 start.go:167] duration metric: took 1m54.2645318s to libmachine.API.Create "ha-270000"
	I0903 23:01:53.833755    1228 start.go:293] postStartSetup for "ha-270000-m02" (driver="hyperv")
	I0903 23:01:53.833755    1228 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0903 23:01:53.845728    1228 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0903 23:01:53.845728    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:01:55.934260    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:01:55.934347    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:55.934347    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:01:58.385956    1228 main.go:141] libmachine: [stdout =====>] : 172.25.120.53
	
	I0903 23:01:58.385956    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:58.387249    1228 sshutil.go:53] new ssh client: &{IP:172.25.120.53 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m02\id_rsa Username:docker}
	I0903 23:01:58.495931    1228 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6491316s)
	I0903 23:01:58.508634    1228 ssh_runner.go:195] Run: cat /etc/os-release
	I0903 23:01:58.516214    1228 info.go:137] Remote host: Buildroot 2025.02
	I0903 23:01:58.516335    1228 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0903 23:01:58.516489    1228 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0903 23:01:58.518016    1228 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem -> 22202.pem in /etc/ssl/certs
	I0903 23:01:58.518016    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem -> /etc/ssl/certs/22202.pem
	I0903 23:01:58.529751    1228 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0903 23:01:58.550120    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem --> /etc/ssl/certs/22202.pem (1708 bytes)
	I0903 23:01:58.602436    1228 start.go:296] duration metric: took 4.7686159s for postStartSetup
	I0903 23:01:58.605183    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:02:00.669905    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:02:00.669905    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:02:00.669905    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:02:03.168966    1228 main.go:141] libmachine: [stdout =====>] : 172.25.120.53
	
	I0903 23:02:03.169974    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:02:03.170221    1228 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\config.json ...
	I0903 23:02:03.172746    1228 start.go:128] duration metric: took 2m3.6060749s to createHost
	I0903 23:02:03.172746    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:02:05.235750    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:02:05.235750    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:02:05.235999    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:02:07.722483    1228 main.go:141] libmachine: [stdout =====>] : 172.25.120.53
	
	I0903 23:02:07.723334    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:02:07.728834    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 23:02:07.729508    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.120.53 22 <nil> <nil>}
	I0903 23:02:07.729508    1228 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0903 23:02:07.865315    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: 1756940527.890251015
	
	I0903 23:02:07.865315    1228 fix.go:216] guest clock: 1756940527.890251015
	I0903 23:02:07.865315    1228 fix.go:229] Guest: 2025-09-03 23:02:07.890251015 +0000 UTC Remote: 2025-09-03 23:02:03.1727465 +0000 UTC m=+322.714665501 (delta=4.717504515s)
	I0903 23:02:07.865541    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:02:09.900597    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:02:09.900687    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:02:09.900794    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:02:12.419556    1228 main.go:141] libmachine: [stdout =====>] : 172.25.120.53
	
	I0903 23:02:12.419883    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:02:12.425786    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 23:02:12.426693    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.120.53 22 <nil> <nil>}
	I0903 23:02:12.426693    1228 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1756940527
	I0903 23:02:12.578947    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Sep  3 23:02:07 UTC 2025
	
	I0903 23:02:12.579084    1228 fix.go:236] clock set: Wed Sep  3 23:02:07 UTC 2025
	 (err=<nil>)
	I0903 23:02:12.579084    1228 start.go:83] releasing machines lock for "ha-270000-m02", held for 2m13.0122836s
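The clock-fix exchange above reads the guest's epoch time with `date +%s.%N`, computes the host/guest delta (4.7s here), and corrects it with `sudo date -s @<epoch>`. A small illustration, using the epoch value taken from this log, showing how that epoch maps back to the human-readable timestamp the guest echoed (assumes GNU `date`):

```shell
# 1756940527 is the guest epoch reported in the log; `date -d @<epoch>`
# renders it back as the UTC timestamp printed after `sudo date -s`.
guest_epoch=1756940527
human=$(date -u -d "@${guest_epoch}" '+%a %b %e %H:%M:%S UTC %Y')
echo "$human"    # the log shows: Wed Sep  3 23:02:07 UTC 2025
```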
	I0903 23:02:12.579294    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:02:14.641126    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:02:14.641126    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:02:14.641126    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:02:17.108916    1228 main.go:141] libmachine: [stdout =====>] : 172.25.120.53
	
	I0903 23:02:17.109452    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:02:17.114623    1228 out.go:179] * Found network options:
	I0903 23:02:17.121169    1228 out.go:179]   - NO_PROXY=172.25.116.52
	W0903 23:02:17.125661    1228 proxy.go:120] fail to check proxy env: Error ip not in block
	I0903 23:02:17.131986    1228 out.go:179]   - NO_PROXY=172.25.116.52
	W0903 23:02:17.136767    1228 proxy.go:120] fail to check proxy env: Error ip not in block
	W0903 23:02:17.138489    1228 proxy.go:120] fail to check proxy env: Error ip not in block
	I0903 23:02:17.141219    1228 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0903 23:02:17.141352    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:02:17.151382    1228 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0903 23:02:17.151382    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:02:19.268373    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:02:19.268373    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:02:19.268373    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:02:19.285622    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:02:19.285622    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:02:19.285880    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:02:21.871079    1228 main.go:141] libmachine: [stdout =====>] : 172.25.120.53
	
	I0903 23:02:21.871892    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:02:21.872024    1228 sshutil.go:53] new ssh client: &{IP:172.25.120.53 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m02\id_rsa Username:docker}
	I0903 23:02:21.905051    1228 main.go:141] libmachine: [stdout =====>] : 172.25.120.53
	
	I0903 23:02:21.905661    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:02:21.906178    1228 sshutil.go:53] new ssh client: &{IP:172.25.120.53 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m02\id_rsa Username:docker}
	I0903 23:02:21.980733    1228 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.8292848s)
	W0903 23:02:21.980818    1228 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0903 23:02:21.994134    1228 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0903 23:02:21.998622    1228 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.8573369s)
	W0903 23:02:21.998622    1228 start.go:868] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
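This status-127 failure is the root cause of the test's "unexpected stderr": the Windows-side binary name `curl.exe` is passed through `ssh_runner` into the Linux guest, where only `curl` exists, so the shell reports "command not found" (exit 127) and minikube surfaces it as the registry-connectivity warning at 23:02:22 below. A sketch reproducing the exit code (assumes `curl.exe` is absent, as on any Linux host):

```shell
# Running a nonexistent command name through a shell yields exit status 127,
# the same status ssh_runner reports for `curl.exe` in the log.
rc=0
sh -c 'curl.exe --version' >/dev/null 2>&1 || rc=$?
echo "exit=$rc"    # exit=127 when curl.exe is not on PATH
```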
	I0903 23:02:22.037084    1228 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0903 23:02:22.037084    1228 start.go:495] detecting cgroup driver to use...
	I0903 23:02:22.037485    1228 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0903 23:02:22.108232    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W0903 23:02:22.112524    1228 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0903 23:02:22.112524    1228 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0903 23:02:22.152392    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0903 23:02:22.180822    1228 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0903 23:02:22.194781    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0903 23:02:22.233790    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0903 23:02:22.271175    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0903 23:02:22.314751    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0903 23:02:22.356383    1228 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0903 23:02:22.394152    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0903 23:02:22.434437    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0903 23:02:22.470960    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0903 23:02:22.512006    1228 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0903 23:02:22.535563    1228 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0903 23:02:22.548897    1228 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0903 23:02:22.591913    1228 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0903 23:02:22.627762    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:02:22.874257    1228 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0903 23:02:22.943307    1228 start.go:495] detecting cgroup driver to use...
	I0903 23:02:22.956751    1228 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0903 23:02:22.997536    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0903 23:02:23.036257    1228 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0903 23:02:23.089575    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0903 23:02:23.147669    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0903 23:02:23.191923    1228 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0903 23:02:23.264858    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0903 23:02:23.290430    1228 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0903 23:02:23.343996    1228 ssh_runner.go:195] Run: which cri-dockerd
	I0903 23:02:23.363909    1228 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0903 23:02:23.387129    1228 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0903 23:02:23.434981    1228 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0903 23:02:23.698368    1228 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0903 23:02:23.914214    1228 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0903 23:02:23.914272    1228 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0903 23:02:23.966599    1228 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0903 23:02:24.002501    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:02:24.240165    1228 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0903 23:02:24.407748    1228 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0903 23:02:24.446656    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0903 23:02:24.486595    1228 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0903 23:02:24.531162    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:02:24.783527    1228 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0903 23:02:25.830171    1228 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.04663s)
	I0903 23:02:25.830171    1228 retry.go:31] will retry after 730.544213ms: docker not running
	I0903 23:02:26.573593    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0903 23:02:26.612362    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0903 23:02:26.653409    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0903 23:02:26.694918    1228 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0903 23:02:26.919537    1228 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0903 23:02:27.150743    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:02:27.382354    1228 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0903 23:02:27.450230    1228 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0903 23:02:27.486316    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:02:27.716726    1228 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0903 23:02:27.879001    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0903 23:02:27.904880    1228 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0903 23:02:27.916481    1228 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0903 23:02:27.925367    1228 start.go:563] Will wait 60s for crictl version
	I0903 23:02:27.937432    1228 ssh_runner.go:195] Run: which crictl
	I0903 23:02:27.959885    1228 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0903 23:02:28.014104    1228 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.3.2
	RuntimeApiVersion:  v1
	I0903 23:02:28.025671    1228 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0903 23:02:28.084798    1228 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0903 23:02:28.123308    1228 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.3.2 ...
	I0903 23:02:28.127043    1228 out.go:179]   - env NO_PROXY=172.25.116.52
	I0903 23:02:28.129459    1228 ip.go:180] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0903 23:02:28.134066    1228 ip.go:194] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0903 23:02:28.134066    1228 ip.go:194] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0903 23:02:28.134066    1228 ip.go:189] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0903 23:02:28.134066    1228 ip.go:215] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:71:2e:33 Flags:up|broadcast|multicast|running}
	I0903 23:02:28.137043    1228 ip.go:218] interface addr: fe80::b536:5e95:cebf:bd87/64
	I0903 23:02:28.137043    1228 ip.go:218] interface addr: 172.25.112.1/20
	I0903 23:02:28.148693    1228 ssh_runner.go:195] Run: grep 172.25.112.1	host.minikube.internal$ /etc/hosts
	I0903 23:02:28.154833    1228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.112.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
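The `/etc/hosts` one-liner above is a replace-in-place idiom: strip any stale `host.minikube.internal` line, append the current host IP, and install the result through a temp file (the log uses `sudo cp` for the final step). A sketch against a scratch file standing in for `/etc/hosts`; the stale `10.0.0.9` entry is an invented example:

```shell
# $hosts stands in for /etc/hosts; seed it with a stale mapping, then apply
# the same filter-and-append rewrite minikube runs over SSH.
set -eu
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"
tmp=$(mktemp)
{ grep -v 'host.minikube.internal$' "$hosts"
  printf '172.25.112.1\thost.minikube.internal\n'; } > "$tmp"
cp "$tmp" "$hosts"                   # real flow: sudo cp /tmp/h.$$ /etc/hosts
grep 'host.minikube.internal' "$hosts"
```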
	I0903 23:02:28.178247    1228 mustload.go:65] Loading cluster: ha-270000
	I0903 23:02:28.179102    1228 config.go:182] Loaded profile config "ha-270000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0903 23:02:28.179799    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 23:02:30.176818    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:02:30.177914    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:02:30.177914    1228 host.go:66] Checking if "ha-270000" exists ...
	I0903 23:02:30.178703    1228 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000 for IP: 172.25.120.53
	I0903 23:02:30.178729    1228 certs.go:194] generating shared ca certs ...
	I0903 23:02:30.178729    1228 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:02:30.179604    1228 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0903 23:02:30.180045    1228 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0903 23:02:30.180212    1228 certs.go:256] generating profile certs ...
	I0903 23:02:30.180938    1228 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\client.key
	I0903 23:02:30.180938    1228 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key.b88c18f2
	I0903 23:02:30.180938    1228 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt.b88c18f2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.116.52 172.25.120.53 172.25.127.254]
	I0903 23:02:30.395907    1228 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt.b88c18f2 ...
	I0903 23:02:30.395907    1228 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt.b88c18f2: {Name:mk7aac0e6550922b9849977e7787842e204aef05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:02:30.397897    1228 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key.b88c18f2 ...
	I0903 23:02:30.397897    1228 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key.b88c18f2: {Name:mk7047c75908bd73cad06091655137c8e83bc1df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:02:30.398333    1228 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt.b88c18f2 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt
	I0903 23:02:30.415352    1228 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key.b88c18f2 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key
	I0903 23:02:30.417340    1228 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.key
	I0903 23:02:30.417340    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0903 23:02:30.417768    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0903 23:02:30.417768    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0903 23:02:30.417768    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0903 23:02:30.417768    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0903 23:02:30.418422    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0903 23:02:30.418422    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0903 23:02:30.418422    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0903 23:02:30.419238    1228 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220.pem (1338 bytes)
	W0903 23:02:30.419387    1228 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220_empty.pem, impossibly tiny 0 bytes
	I0903 23:02:30.419996    1228 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0903 23:02:30.420294    1228 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0903 23:02:30.420550    1228 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0903 23:02:30.420773    1228 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0903 23:02:30.421480    1228 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem (1708 bytes)
	I0903 23:02:30.421480    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220.pem -> /usr/share/ca-certificates/2220.pem
	I0903 23:02:30.421480    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem -> /usr/share/ca-certificates/22202.pem
	I0903 23:02:30.422018    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:02:30.422370    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 23:02:32.479252    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:02:32.479252    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:02:32.480109    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 23:02:34.968850    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 23:02:34.968850    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:02:34.969195    1228 sshutil.go:53] new ssh client: &{IP:172.25.116.52 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000\id_rsa Username:docker}
	I0903 23:02:35.076314    1228 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0903 23:02:35.085021    1228 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0903 23:02:35.124246    1228 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0903 23:02:35.132420    1228 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0903 23:02:35.164678    1228 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0903 23:02:35.173261    1228 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0903 23:02:35.209197    1228 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0903 23:02:35.215885    1228 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0903 23:02:35.252342    1228 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0903 23:02:35.259528    1228 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0903 23:02:35.291980    1228 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0903 23:02:35.298852    1228 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0903 23:02:35.322093    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0903 23:02:35.378446    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0903 23:02:35.435474    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0903 23:02:35.490607    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0903 23:02:35.549811    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0903 23:02:35.606434    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0903 23:02:35.658899    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0903 23:02:35.708021    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0903 23:02:35.761504    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220.pem --> /usr/share/ca-certificates/2220.pem (1338 bytes)
	I0903 23:02:35.810231    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem --> /usr/share/ca-certificates/22202.pem (1708 bytes)
	I0903 23:02:35.860095    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0903 23:02:35.912273    1228 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0903 23:02:35.947580    1228 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0903 23:02:35.983875    1228 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0903 23:02:36.017419    1228 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0903 23:02:36.053674    1228 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0903 23:02:36.090219    1228 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0903 23:02:36.129358    1228 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0903 23:02:36.183062    1228 ssh_runner.go:195] Run: openssl version
	I0903 23:02:36.203475    1228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2220.pem && ln -fs /usr/share/ca-certificates/2220.pem /etc/ssl/certs/2220.pem"
	I0903 23:02:36.239369    1228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2220.pem
	I0903 23:02:36.246639    1228 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  3 22:37 /usr/share/ca-certificates/2220.pem
	I0903 23:02:36.258944    1228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2220.pem
	I0903 23:02:36.284375    1228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2220.pem /etc/ssl/certs/51391683.0"
	I0903 23:02:36.324081    1228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22202.pem && ln -fs /usr/share/ca-certificates/22202.pem /etc/ssl/certs/22202.pem"
	I0903 23:02:36.358547    1228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22202.pem
	I0903 23:02:36.365732    1228 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  3 22:37 /usr/share/ca-certificates/22202.pem
	I0903 23:02:36.379894    1228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22202.pem
	I0903 23:02:36.402052    1228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22202.pem /etc/ssl/certs/3ec20f2e.0"
	I0903 23:02:36.438811    1228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0903 23:02:36.476217    1228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:02:36.485906    1228 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  3 22:20 /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:02:36.498645    1228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:02:36.522003    1228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
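The `openssl x509 -hash` / `ln -fs` pairing above is OpenSSL's hashed-certificate-directory convention: each CA under `/etc/ssl/certs` gets a `<subject-hash>.0` symlink so `-CApath` lookups can find it. A minimal sketch with a throwaway self-signed cert (all `/tmp` paths are illustrative, not from the run above):

```shell
# Create a throwaway CA cert, then install it under the OpenSSL hashed name
# (<subject-hash>.0), mirroring what the run above does for
# /usr/share/ca-certificates/*.pem -> /etc/ssl/certs/<hash>.0.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demoCA" -keyout /tmp/demo-ca.key -out /tmp/demo-ca.pem 2>/dev/null
hash=$(openssl x509 -hash -noout -in /tmp/demo-ca.pem)
ln -fs /tmp/demo-ca.pem "/tmp/${hash}.0"
ls -l "/tmp/${hash}.0"
```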
	I0903 23:02:36.559937    1228 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0903 23:02:36.567668    1228 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
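The `stat` failure here is expected, not an error: minikube probes for a file and treats a non-zero exit as "not present yet, generate/transfer it". The same probe in miniature (the path is illustrative):

```shell
# stat exits non-zero for a missing path; the log above uses exactly this
# as an existence check before deciding whether to create the cert.
if stat -c '%s %y' /tmp/definitely-missing-file 2>/dev/null; then
  echo present
else
  echo absent
fi
# prints "absent"
```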
	I0903 23:02:36.567668    1228 kubeadm.go:926] updating node {m02 172.25.120.53 8443 v1.34.0 docker true true} ...
	I0903 23:02:36.567668    1228 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-270000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.120.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-270000 Namespace:default APIServerHAVIP:172.25.127.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0903 23:02:36.567668    1228 kube-vip.go:115] generating kube-vip config ...
	I0903 23:02:36.580100    1228 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0903 23:02:36.612349    1228 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0903 23:02:36.612349    1228 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.25.127.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0903 23:02:36.624845    1228 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0903 23:02:36.645739    1228 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.0': No such file or directory
	
	Initiating transfer...
	I0903 23:02:36.657760    1228 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.0
	I0903 23:02:36.683565    1228 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubelet
	I0903 23:02:36.684508    1228 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubeadm
	I0903 23:02:36.684508    1228 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubectl
	I0903 23:02:38.285454    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0903 23:02:38.315146    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubelet -> /var/lib/minikube/binaries/v1.34.0/kubelet
	I0903 23:02:38.327145    1228 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubelet
	I0903 23:02:38.332137    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubectl -> /var/lib/minikube/binaries/v1.34.0/kubectl
	I0903 23:02:38.336136    1228 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.0/kubelet': No such file or directory
	I0903 23:02:38.336136    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubelet --> /var/lib/minikube/binaries/v1.34.0/kubelet (59195684 bytes)
	I0903 23:02:38.344136    1228 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubectl
	I0903 23:02:38.404363    1228 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.0/kubectl': No such file or directory
	I0903 23:02:38.404363    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubectl --> /var/lib/minikube/binaries/v1.34.0/kubectl (60559544 bytes)
	I0903 23:02:38.522359    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubeadm -> /var/lib/minikube/binaries/v1.34.0/kubeadm
	I0903 23:02:38.534369    1228 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubeadm
	I0903 23:02:38.586706    1228 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.0/kubeadm': No such file or directory
	I0903 23:02:38.586706    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubeadm --> /var/lib/minikube/binaries/v1.34.0/kubeadm (74027192 bytes)
	I0903 23:02:39.706036    1228 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0903 23:02:39.726158    1228 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0903 23:02:39.760806    1228 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0903 23:02:39.796905    1228 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0903 23:02:39.846006    1228 ssh_runner.go:195] Run: grep 172.25.127.254	control-plane.minikube.internal$ /etc/hosts
	I0903 23:02:39.852610    1228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.127.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
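The `/etc/hosts` update above is a filter-and-append idiom: strip any stale `control-plane.minikube.internal` entry, then append the current VIP, writing through a temp file. The same idiom against a scratch file so nothing system-wide is touched (paths and the stale `1.2.3.4` entry are illustrative):

```shell
# Filter out any old entry for the name, append the VIP, swap files in.
HOSTS=/tmp/demo-hosts
printf '127.0.0.1\tlocalhost\n1.2.3.4\tcontrol-plane.minikube.internal\n' > "$HOSTS"
{ grep -v 'control-plane\.minikube\.internal$' "$HOSTS"; \
  printf '172.25.127.254\tcontrol-plane.minikube.internal\n'; } > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"
cat "$HOSTS"
```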
	I0903 23:02:39.887503    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:02:40.130728    1228 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0903 23:02:40.193877    1228 host.go:66] Checking if "ha-270000" exists ...
	I0903 23:02:40.194912    1228 start.go:317] joinCluster: &{Name:ha-270000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-270000 Namespace:default APIServerHAVIP:172.25.127.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.116.52 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.120.53 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:02:40.195036    1228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0903 23:02:40.195215    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 23:02:42.243215    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:02:42.243215    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:02:42.243215    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 23:02:44.762246    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 23:02:44.762373    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:02:44.762881    1228 sshutil.go:53] new ssh client: &{IP:172.25.116.52 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000\id_rsa Username:docker}
	I0903 23:02:44.973287    1228 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.7780989s)
	I0903 23:02:44.973430    1228 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.25.120.53 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0903 23:02:44.973492    1228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5qejmg.nad7xkhs0xgwmu9q --discovery-token-ca-cert-hash sha256:461028e7d31446a9db54ef88db35928fa51812dbcfd2f42c8a70c32665923137 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-270000-m02 --control-plane --apiserver-advertise-address=172.25.120.53 --apiserver-bind-port=8443"
	I0903 23:03:47.510383    1228 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5qejmg.nad7xkhs0xgwmu9q --discovery-token-ca-cert-hash sha256:461028e7d31446a9db54ef88db35928fa51812dbcfd2f42c8a70c32665923137 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-270000-m02 --control-plane --apiserver-advertise-address=172.25.120.53 --apiserver-bind-port=8443": (1m2.5354671s)
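The `--discovery-token-ca-cert-hash` in the join command above is a SHA-256 over the cluster CA certificate's DER-encoded public key. It can be recomputed from any CA cert; sketched here against a throwaway certificate rather than the cluster's real `ca.crt` (paths are illustrative):

```shell
# Recompute a kubeadm-style discovery hash: sha256 of the certificate's
# DER-encoded Subject Public Key Info.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=kubernetes" -keyout /tmp/join-ca.key -out /tmp/join-ca.crt 2>/dev/null
openssl x509 -pubkey -noout -in /tmp/join-ca.crt \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 | awk '{print "sha256:" $NF}'
```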
	I0903 23:03:47.510383    1228 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0903 23:03:48.183926    1228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-270000-m02 minikube.k8s.io/updated_at=2025_09_03T23_03_48_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=b3583632deefb20d71cab8d8ac0a8c3504aed1fb minikube.k8s.io/name=ha-270000 minikube.k8s.io/primary=false
	I0903 23:03:48.358296    1228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-270000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0903 23:03:48.522856    1228 start.go:319] duration metric: took 1m8.3270012s to joinCluster
	I0903 23:03:48.522856    1228 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.25.120.53 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0903 23:03:48.522856    1228 config.go:182] Loaded profile config "ha-270000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0903 23:03:48.525794    1228 out.go:179] * Verifying Kubernetes components...
	I0903 23:03:48.546149    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:03:48.832449    1228 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0903 23:03:48.861524    1228 kapi.go:59] client config for ha-270000: &rest.Config{Host:"https://172.25.127.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-270000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-270000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24e0580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0903 23:03:48.861524    1228 kubeadm.go:483] Overriding stale ClientConfig host https://172.25.127.254:8443 with https://172.25.116.52:8443
	I0903 23:03:48.863381    1228 node_ready.go:35] waiting up to 6m0s for node "ha-270000-m02" to be "Ready" ...
	W0903 23:03:50.875715    1228 node_ready.go:57] node "ha-270000-m02" has "Ready":"False" status (will retry)
	W0903 23:03:53.370328    1228 node_ready.go:57] node "ha-270000-m02" has "Ready":"False" status (will retry)
	W0903 23:03:56.201778    1228 node_ready.go:57] node "ha-270000-m02" has "Ready":"False" status (will retry)
	W0903 23:03:58.371404    1228 node_ready.go:57] node "ha-270000-m02" has "Ready":"False" status (will retry)
	W0903 23:04:00.874342    1228 node_ready.go:57] node "ha-270000-m02" has "Ready":"False" status (will retry)
	W0903 23:04:03.370845    1228 node_ready.go:57] node "ha-270000-m02" has "Ready":"False" status (will retry)
	W0903 23:04:05.871117    1228 node_ready.go:57] node "ha-270000-m02" has "Ready":"False" status (will retry)
	W0903 23:04:08.370253    1228 node_ready.go:57] node "ha-270000-m02" has "Ready":"False" status (will retry)
	W0903 23:04:10.873593    1228 node_ready.go:57] node "ha-270000-m02" has "Ready":"False" status (will retry)
	I0903 23:04:13.369357    1228 node_ready.go:49] node "ha-270000-m02" is "Ready"
	I0903 23:04:13.369357    1228 node_ready.go:38] duration metric: took 24.5055398s for node "ha-270000-m02" to be "Ready" ...
	I0903 23:04:13.369357    1228 api_server.go:52] waiting for apiserver process to appear ...
	I0903 23:04:13.381776    1228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:04:13.418265    1228 api_server.go:72] duration metric: took 24.8950628s to wait for apiserver process to appear ...
	I0903 23:04:13.418343    1228 api_server.go:88] waiting for apiserver healthz status ...
	I0903 23:04:13.418343    1228 api_server.go:253] Checking apiserver healthz at https://172.25.116.52:8443/healthz ...
	I0903 23:04:13.427097    1228 api_server.go:279] https://172.25.116.52:8443/healthz returned 200:
	ok
	I0903 23:04:13.429087    1228 api_server.go:141] control plane version: v1.34.0
	I0903 23:04:13.429087    1228 api_server.go:131] duration metric: took 10.7443ms to wait for apiserver health ...
	I0903 23:04:13.429087    1228 system_pods.go:43] waiting for kube-system pods to appear ...
	I0903 23:04:13.448347    1228 system_pods.go:59] 17 kube-system pods found
	I0903 23:04:13.448409    1228 system_pods.go:61] "coredns-66bc5c9577-58qw9" [e4c3bec4-9c47-404e-98ff-21e0aee82931] Running
	I0903 23:04:13.448409    1228 system_pods.go:61] "coredns-66bc5c9577-cnk8d" [20226b19-1d13-4057-88c1-709997f24868] Running
	I0903 23:04:13.448409    1228 system_pods.go:61] "etcd-ha-270000" [bedaa6e6-7109-475b-b96e-34178b2a83e2] Running
	I0903 23:04:13.448409    1228 system_pods.go:61] "etcd-ha-270000-m02" [d123ed06-ba3b-4745-a419-0b7720e9e903] Running
	I0903 23:04:13.448409    1228 system_pods.go:61] "kindnet-96trb" [32ea1443-99f0-4e56-99cb-d1ce43dbcb2f] Running
	I0903 23:04:13.448409    1228 system_pods.go:61] "kindnet-vsgwr" [aa24d517-8c6d-4625-bd97-6f7fe1f7f72e] Running
	I0903 23:04:13.448409    1228 system_pods.go:61] "kube-apiserver-ha-270000" [8b258bec-c81d-404f-b217-dccd40799d89] Running
	I0903 23:04:13.448409    1228 system_pods.go:61] "kube-apiserver-ha-270000-m02" [16ba52a6-4dfc-487f-9bc9-65d94e1fffd8] Running
	I0903 23:04:13.448409    1228 system_pods.go:61] "kube-controller-manager-ha-270000" [a695c6ed-2e2f-41ea-a250-9b01b1ae90af] Running
	I0903 23:04:13.448409    1228 system_pods.go:61] "kube-controller-manager-ha-270000-m02" [f39fb141-4af3-4207-8f1c-1ce77b760861] Running
	I0903 23:04:13.448409    1228 system_pods.go:61] "kube-proxy-qkts6" [8e651463-997a-4431-a14c-29557282565f] Running
	I0903 23:04:13.448409    1228 system_pods.go:61] "kube-proxy-t96st" [f609fa93-da46-46a5-ba36-84c291da86a5] Running
	I0903 23:04:13.448409    1228 system_pods.go:61] "kube-scheduler-ha-270000" [a257c6a6-4337-49fd-ba96-c6248221f207] Running
	I0903 23:04:13.448409    1228 system_pods.go:61] "kube-scheduler-ha-270000-m02" [5c49ee66-b613-4b3c-9539-da558d1dd53a] Running
	I0903 23:04:13.448409    1228 system_pods.go:61] "kube-vip-ha-270000" [4a489bea-b3e7-43bd-96e0-58c1480000a4] Running
	I0903 23:04:13.448409    1228 system_pods.go:61] "kube-vip-ha-270000-m02" [163cfde8-7488-49ac-b241-2509a7b01d1b] Running
	I0903 23:04:13.448409    1228 system_pods.go:61] "storage-provisioner" [7643327e-078c-45c9-9a32-cdf3b7a72986] Running
	I0903 23:04:13.448409    1228 system_pods.go:74] duration metric: took 19.3217ms to wait for pod list to return data ...
	I0903 23:04:13.448409    1228 default_sa.go:34] waiting for default service account to be created ...
	I0903 23:04:13.454264    1228 default_sa.go:45] found service account: "default"
	I0903 23:04:13.454264    1228 default_sa.go:55] duration metric: took 5.8552ms for default service account to be created ...
	I0903 23:04:13.454264    1228 system_pods.go:116] waiting for k8s-apps to be running ...
	I0903 23:04:13.462247    1228 system_pods.go:86] 17 kube-system pods found
	I0903 23:04:13.462302    1228 system_pods.go:89] "coredns-66bc5c9577-58qw9" [e4c3bec4-9c47-404e-98ff-21e0aee82931] Running
	I0903 23:04:13.462302    1228 system_pods.go:89] "coredns-66bc5c9577-cnk8d" [20226b19-1d13-4057-88c1-709997f24868] Running
	I0903 23:04:13.462302    1228 system_pods.go:89] "etcd-ha-270000" [bedaa6e6-7109-475b-b96e-34178b2a83e2] Running
	I0903 23:04:13.462302    1228 system_pods.go:89] "etcd-ha-270000-m02" [d123ed06-ba3b-4745-a419-0b7720e9e903] Running
	I0903 23:04:13.462368    1228 system_pods.go:89] "kindnet-96trb" [32ea1443-99f0-4e56-99cb-d1ce43dbcb2f] Running
	I0903 23:04:13.462368    1228 system_pods.go:89] "kindnet-vsgwr" [aa24d517-8c6d-4625-bd97-6f7fe1f7f72e] Running
	I0903 23:04:13.462368    1228 system_pods.go:89] "kube-apiserver-ha-270000" [8b258bec-c81d-404f-b217-dccd40799d89] Running
	I0903 23:04:13.462368    1228 system_pods.go:89] "kube-apiserver-ha-270000-m02" [16ba52a6-4dfc-487f-9bc9-65d94e1fffd8] Running
	I0903 23:04:13.462425    1228 system_pods.go:89] "kube-controller-manager-ha-270000" [a695c6ed-2e2f-41ea-a250-9b01b1ae90af] Running
	I0903 23:04:13.462459    1228 system_pods.go:89] "kube-controller-manager-ha-270000-m02" [f39fb141-4af3-4207-8f1c-1ce77b760861] Running
	I0903 23:04:13.462459    1228 system_pods.go:89] "kube-proxy-qkts6" [8e651463-997a-4431-a14c-29557282565f] Running
	I0903 23:04:13.462488    1228 system_pods.go:89] "kube-proxy-t96st" [f609fa93-da46-46a5-ba36-84c291da86a5] Running
	I0903 23:04:13.462510    1228 system_pods.go:89] "kube-scheduler-ha-270000" [a257c6a6-4337-49fd-ba96-c6248221f207] Running
	I0903 23:04:13.462510    1228 system_pods.go:89] "kube-scheduler-ha-270000-m02" [5c49ee66-b613-4b3c-9539-da558d1dd53a] Running
	I0903 23:04:13.462510    1228 system_pods.go:89] "kube-vip-ha-270000" [4a489bea-b3e7-43bd-96e0-58c1480000a4] Running
	I0903 23:04:13.462510    1228 system_pods.go:89] "kube-vip-ha-270000-m02" [163cfde8-7488-49ac-b241-2509a7b01d1b] Running
	I0903 23:04:13.462510    1228 system_pods.go:89] "storage-provisioner" [7643327e-078c-45c9-9a32-cdf3b7a72986] Running
	I0903 23:04:13.462566    1228 system_pods.go:126] duration metric: took 8.2453ms to wait for k8s-apps to be running ...
	I0903 23:04:13.462566    1228 system_svc.go:44] waiting for kubelet service to be running ....
	I0903 23:04:13.478377    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0903 23:04:13.509754    1228 system_svc.go:56] duration metric: took 47.1871ms WaitForService to wait for kubelet
	I0903 23:04:13.509906    1228 kubeadm.go:578] duration metric: took 24.986641s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0903 23:04:13.509906    1228 node_conditions.go:102] verifying NodePressure condition ...
	I0903 23:04:13.516151    1228 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0903 23:04:13.516151    1228 node_conditions.go:123] node cpu capacity is 2
	I0903 23:04:13.516151    1228 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0903 23:04:13.517172    1228 node_conditions.go:123] node cpu capacity is 2
	I0903 23:04:13.517172    1228 node_conditions.go:105] duration metric: took 7.2658ms to run NodePressure ...
	I0903 23:04:13.517172    1228 start.go:241] waiting for startup goroutines ...
	I0903 23:04:13.517172    1228 start.go:255] writing updated cluster config ...
	I0903 23:04:13.525149    1228 out.go:203] 
	I0903 23:04:13.536143    1228 config.go:182] Loaded profile config "ha-270000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0903 23:04:13.537142    1228 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\config.json ...
	I0903 23:04:13.543137    1228 out.go:179] * Starting "ha-270000-m03" control-plane node in "ha-270000" cluster
	I0903 23:04:13.546141    1228 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0903 23:04:13.546141    1228 cache.go:58] Caching tarball of preloaded images
	I0903 23:04:13.547140    1228 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0903 23:04:13.547140    1228 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0903 23:04:13.547140    1228 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\config.json ...
	I0903 23:04:13.552141    1228 start.go:360] acquireMachinesLock for ha-270000-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0903 23:04:13.553161    1228 start.go:364] duration metric: took 1.0198ms to acquireMachinesLock for "ha-270000-m03"
	I0903 23:04:13.553161    1228 start.go:93] Provisioning new machine with config: &{Name:ha-270000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-270000 Namespace:default APIServerHAVIP:172.25.127.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.116.52 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.120.53 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0903 23:04:13.553161    1228 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0903 23:04:13.556149    1228 out.go:252] * Creating hyperv VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0903 23:04:13.556149    1228 start.go:159] libmachine.API.Create for "ha-270000" (driver="hyperv")
	I0903 23:04:13.556149    1228 client.go:168] LocalClient.Create starting
	I0903 23:04:13.557175    1228 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0903 23:04:13.557175    1228 main.go:141] libmachine: Decoding PEM data...
	I0903 23:04:13.557175    1228 main.go:141] libmachine: Parsing certificate...
	I0903 23:04:13.557175    1228 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0903 23:04:13.558142    1228 main.go:141] libmachine: Decoding PEM data...
	I0903 23:04:13.558142    1228 main.go:141] libmachine: Parsing certificate...
	I0903 23:04:13.558142    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0903 23:04:15.458396    1228 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0903 23:04:15.458396    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:04:15.458396    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0903 23:04:17.185387    1228 main.go:141] libmachine: [stdout =====>] : False
	
	I0903 23:04:17.185462    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:04:17.185462    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0903 23:04:18.680897    1228 main.go:141] libmachine: [stdout =====>] : True
	
	I0903 23:04:18.680897    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:04:18.680897    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0903 23:04:22.388806    1228 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0903 23:04:22.388871    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:04:22.391054    1228 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.36.0-1753487480-21147-amd64.iso...
	I0903 23:04:23.085163    1228 main.go:141] libmachine: Creating SSH key...
	I0903 23:04:23.461451    1228 main.go:141] libmachine: Creating VM...
	I0903 23:04:23.461451    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0903 23:04:26.351942    1228 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0903 23:04:26.351942    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:04:26.351942    1228 main.go:141] libmachine: Using switch "Default Switch"
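The switch selection above runs `Hyper-V\Get-VMSwitch` through PowerShell with `ConvertTo-Json` and picks a switch from the result. A minimal Go sketch of the parsing step, using the exact JSON shape shown in the log (the `vmSwitch` struct and `pickSwitch` helper are illustrative, not minikube's actual code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// vmSwitch mirrors the fields selected by the PowerShell query:
// Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType
type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int // numeric enum value as emitted by ConvertTo-Json (1 in the log above)
}

// pickSwitch parses the JSON array returned by PowerShell and returns
// the name of the first candidate switch.
func pickSwitch(jsonOut string) (string, error) {
	var switches []vmSwitch
	if err := json.Unmarshal([]byte(jsonOut), &switches); err != nil {
		return "", err
	}
	if len(switches) == 0 {
		return "", fmt.Errorf("no usable Hyper-V switch found")
	}
	return switches[0].Name, nil
}

func main() {
	// JSON taken verbatim (whitespace compacted) from the log output above.
	out := `[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`
	name, err := pickSwitch(out)
	fmt.Println(name, err) // prints: Default Switch <nil>
}
```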
	I0903 23:04:26.351942    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0903 23:04:28.196768    1228 main.go:141] libmachine: [stdout =====>] : True
	
	I0903 23:04:28.196768    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:04:28.196768    1228 main.go:141] libmachine: Creating VHD
	I0903 23:04:28.197216    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0903 23:04:31.939199    1228 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : BA115B43-14A8-4C03-8065-3CE69285267E
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0903 23:04:31.939620    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:04:31.939702    1228 main.go:141] libmachine: Writing magic tar header
	I0903 23:04:31.939702    1228 main.go:141] libmachine: Writing SSH key tar header
	I0903 23:04:31.952710    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0903 23:04:35.106317    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:04:35.107044    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:04:35.107044    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m03\disk.vhd' -SizeBytes 20000MB
	I0903 23:04:37.603159    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:04:37.603195    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:04:37.603258    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-270000-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 3072MB
	I0903 23:04:41.277675    1228 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-270000-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0903 23:04:41.277969    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:04:41.277969    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-270000-m03 -DynamicMemoryEnabled $false
	I0903 23:04:43.473805    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:04:43.473883    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:04:43.473945    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-270000-m03 -Count 2
	I0903 23:04:45.607596    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:04:45.607596    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:04:45.608052    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-270000-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m03\boot2docker.iso'
	I0903 23:04:48.145327    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:04:48.145542    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:04:48.145542    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-270000-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m03\disk.vhd'
	I0903 23:04:50.779285    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:04:50.779285    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:04:50.779285    1228 main.go:141] libmachine: Starting VM...
	I0903 23:04:50.779621    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-270000-m03
	I0903 23:04:53.888059    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:04:53.888316    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:04:53.891797    1228 main.go:141] libmachine: Waiting for host to start...
	I0903 23:04:53.892119    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:04:56.183737    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:04:56.183737    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:04:56.183737    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:04:58.765903    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:04:58.765903    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:04:59.767590    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:05:01.978550    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:05:01.978836    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:01.978836    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:05:04.506980    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:05:04.506980    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:05.507971    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:05:07.704073    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:05:07.704073    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:07.704073    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:05:10.264880    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:05:10.264880    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:11.265698    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:05:13.513097    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:05:13.513859    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:13.514043    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:05:16.035745    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:05:16.036200    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:17.037160    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:05:19.211715    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:05:19.212181    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:19.212312    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:05:21.890196    1228 main.go:141] libmachine: [stdout =====>] : 172.25.124.104
	
	I0903 23:05:21.890247    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:21.890295    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:05:24.081974    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:05:24.082401    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:24.082401    1228 machine.go:93] provisionDockerMachine start ...
	I0903 23:05:24.082478    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:05:26.306587    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:05:26.306587    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:26.306587    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:05:29.051419    1228 main.go:141] libmachine: [stdout =====>] : 172.25.124.104
	
	I0903 23:05:29.052412    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:29.058240    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 23:05:29.059063    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.124.104 22 <nil> <nil>}
	I0903 23:05:29.059063    1228 main.go:141] libmachine: About to run SSH command:
	hostname
	I0903 23:05:29.209123    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0903 23:05:29.209123    1228 buildroot.go:166] provisioning hostname "ha-270000-m03"
	I0903 23:05:29.209232    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:05:31.344098    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:05:31.344098    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:31.344528    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:05:33.888473    1228 main.go:141] libmachine: [stdout =====>] : 172.25.124.104
	
	I0903 23:05:33.888473    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:33.895696    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 23:05:33.895867    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.124.104 22 <nil> <nil>}
	I0903 23:05:33.895867    1228 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-270000-m03 && echo "ha-270000-m03" | sudo tee /etc/hostname
	I0903 23:05:34.057614    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-270000-m03
	
	I0903 23:05:34.057746    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:05:36.122549    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:05:36.123434    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:36.123434    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:05:38.665402    1228 main.go:141] libmachine: [stdout =====>] : 172.25.124.104
	
	I0903 23:05:38.666532    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:38.674788    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 23:05:38.674788    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.124.104 22 <nil> <nil>}
	I0903 23:05:38.674788    1228 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-270000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-270000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-270000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0903 23:05:38.826994    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0903 23:05:38.826994    1228 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0903 23:05:38.826994    1228 buildroot.go:174] setting up certificates
	I0903 23:05:38.826994    1228 provision.go:84] configureAuth start
	I0903 23:05:38.827837    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:05:40.926491    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:05:40.926580    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:40.926655    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:05:43.462512    1228 main.go:141] libmachine: [stdout =====>] : 172.25.124.104
	
	I0903 23:05:43.462512    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:43.462605    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:05:45.554866    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:05:45.554866    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:45.555726    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:05:48.043996    1228 main.go:141] libmachine: [stdout =====>] : 172.25.124.104
	
	I0903 23:05:48.044741    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:48.044741    1228 provision.go:143] copyHostCerts
	I0903 23:05:48.044972    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0903 23:05:48.045104    1228 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0903 23:05:48.045104    1228 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0903 23:05:48.045630    1228 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0903 23:05:48.046882    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0903 23:05:48.046882    1228 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0903 23:05:48.046882    1228 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0903 23:05:48.047793    1228 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0903 23:05:48.049153    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0903 23:05:48.049183    1228 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0903 23:05:48.049183    1228 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0903 23:05:48.049745    1228 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0903 23:05:48.050544    1228 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-270000-m03 san=[127.0.0.1 172.25.124.104 ha-270000-m03 localhost minikube]
	I0903 23:05:48.545736    1228 provision.go:177] copyRemoteCerts
	I0903 23:05:48.564660    1228 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0903 23:05:48.564660    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:05:50.693480    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:05:50.693712    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:50.693975    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:05:53.183071    1228 main.go:141] libmachine: [stdout =====>] : 172.25.124.104
	
	I0903 23:05:53.183245    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:53.183312    1228 sshutil.go:53] new ssh client: &{IP:172.25.124.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m03\id_rsa Username:docker}
	I0903 23:05:53.300960    1228 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.736234s)
	I0903 23:05:53.300960    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0903 23:05:53.301777    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0903 23:05:53.358470    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0903 23:05:53.358470    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0903 23:05:53.417180    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0903 23:05:53.417373    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0903 23:05:53.470867    1228 provision.go:87] duration metric: took 14.6436682s to configureAuth
	I0903 23:05:53.470867    1228 buildroot.go:189] setting minikube options for container-runtime
	I0903 23:05:53.471790    1228 config.go:182] Loaded profile config "ha-270000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0903 23:05:53.471943    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:05:55.539366    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:05:55.539885    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:55.539982    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:05:58.056561    1228 main.go:141] libmachine: [stdout =====>] : 172.25.124.104
	
	I0903 23:05:58.056561    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:58.063313    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 23:05:58.064116    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.124.104 22 <nil> <nil>}
	I0903 23:05:58.064116    1228 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0903 23:05:58.199787    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0903 23:05:58.199787    1228 buildroot.go:70] root file system type: tmpfs
	I0903 23:05:58.199787    1228 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0903 23:05:58.200385    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:06:00.313410    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:06:00.314421    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:00.314649    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:06:02.815102    1228 main.go:141] libmachine: [stdout =====>] : 172.25.124.104
	
	I0903 23:06:02.815276    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:02.820306    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 23:06:02.821108    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.124.104 22 <nil> <nil>}
	I0903 23:06:02.821108    1228 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=172.25.116.52"
	Environment="NO_PROXY=172.25.116.52,172.25.120.53"
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0903 23:06:02.993633    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=172.25.116.52
	Environment=NO_PROXY=172.25.116.52,172.25.120.53
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0903 23:06:02.993701    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:06:05.123743    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:06:05.123854    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:05.123959    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:06:07.647440    1228 main.go:141] libmachine: [stdout =====>] : 172.25.124.104
	
	I0903 23:06:07.647440    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:07.654073    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 23:06:07.654594    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.124.104 22 <nil> <nil>}
	I0903 23:06:07.654594    1228 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0903 23:06:09.115301    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
	
	I0903 23:06:09.115367    1228 machine.go:96] duration metric: took 45.0323378s to provisionDockerMachine
	I0903 23:06:09.115426    1228 client.go:171] duration metric: took 1m55.5576678s to LocalClient.Create
	I0903 23:06:09.115426    1228 start.go:167] duration metric: took 1m55.5576678s to libmachine.API.Create "ha-270000"
	I0903 23:06:09.115426    1228 start.go:293] postStartSetup for "ha-270000-m03" (driver="hyperv")
	I0903 23:06:09.115513    1228 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0903 23:06:09.129223    1228 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0903 23:06:09.129223    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:06:11.253419    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:06:11.253419    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:11.254137    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:06:13.800457    1228 main.go:141] libmachine: [stdout =====>] : 172.25.124.104
	
	I0903 23:06:13.800457    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:13.801548    1228 sshutil.go:53] new ssh client: &{IP:172.25.124.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m03\id_rsa Username:docker}
	I0903 23:06:13.907399    1228 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7781095s)
	I0903 23:06:13.920598    1228 ssh_runner.go:195] Run: cat /etc/os-release
	I0903 23:06:13.928728    1228 info.go:137] Remote host: Buildroot 2025.02
	I0903 23:06:13.928728    1228 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0903 23:06:13.929350    1228 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0903 23:06:13.930876    1228 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem -> 22202.pem in /etc/ssl/certs
	I0903 23:06:13.930876    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem -> /etc/ssl/certs/22202.pem
	I0903 23:06:13.944298    1228 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0903 23:06:13.965627    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem --> /etc/ssl/certs/22202.pem (1708 bytes)
	I0903 23:06:14.022524    1228 start.go:296] duration metric: took 4.9069426s for postStartSetup
	I0903 23:06:14.025110    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:06:16.118829    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:06:16.118829    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:16.119880    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:06:18.669287    1228 main.go:141] libmachine: [stdout =====>] : 172.25.124.104
	
	I0903 23:06:18.669287    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:18.669575    1228 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\config.json ...
	I0903 23:06:18.672473    1228 start.go:128] duration metric: took 2m5.1175697s to createHost
	I0903 23:06:18.672473    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:06:20.816404    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:06:20.816923    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:20.816923    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:06:23.469795    1228 main.go:141] libmachine: [stdout =====>] : 172.25.124.104
	
	I0903 23:06:23.469795    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:23.477301    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 23:06:23.477917    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.124.104 22 <nil> <nil>}
	I0903 23:06:23.477917    1228 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0903 23:06:23.635065    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: 1756940783.657838700
	
	I0903 23:06:23.635065    1228 fix.go:216] guest clock: 1756940783.657838700
	I0903 23:06:23.635065    1228 fix.go:229] Guest: 2025-09-03 23:06:23.6578387 +0000 UTC Remote: 2025-09-03 23:06:18.6724738 +0000 UTC m=+578.210851701 (delta=4.9853649s)
	I0903 23:06:23.635227    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:06:25.740790    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:06:25.740790    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:25.741084    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:06:28.303932    1228 main.go:141] libmachine: [stdout =====>] : 172.25.124.104
	
	I0903 23:06:28.303932    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:28.310038    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 23:06:28.310528    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.124.104 22 <nil> <nil>}
	I0903 23:06:28.310528    1228 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1756940783
	I0903 23:06:28.465023    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Sep  3 23:06:23 UTC 2025
	
	I0903 23:06:28.465023    1228 fix.go:236] clock set: Wed Sep  3 23:06:23 UTC 2025
	 (err=<nil>)
	I0903 23:06:28.465023    1228 start.go:83] releasing machines lock for "ha-270000-m03", held for 2m14.9099825s
	I0903 23:06:28.465023    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:06:30.544444    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:06:30.545209    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:30.545209    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:06:33.073920    1228 main.go:141] libmachine: [stdout =====>] : 172.25.124.104
	
	I0903 23:06:33.074930    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:33.078955    1228 out.go:179] * Found network options:
	I0903 23:06:33.081858    1228 out.go:179]   - NO_PROXY=172.25.116.52,172.25.120.53
	W0903 23:06:33.084584    1228 proxy.go:120] fail to check proxy env: Error ip not in block
	W0903 23:06:33.084584    1228 proxy.go:120] fail to check proxy env: Error ip not in block
	I0903 23:06:33.087373    1228 out.go:179]   - NO_PROXY=172.25.116.52,172.25.120.53
	W0903 23:06:33.090426    1228 proxy.go:120] fail to check proxy env: Error ip not in block
	W0903 23:06:33.090426    1228 proxy.go:120] fail to check proxy env: Error ip not in block
	W0903 23:06:33.092544    1228 proxy.go:120] fail to check proxy env: Error ip not in block
	W0903 23:06:33.092544    1228 proxy.go:120] fail to check proxy env: Error ip not in block
	I0903 23:06:33.094513    1228 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0903 23:06:33.094513    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:06:33.110530    1228 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0903 23:06:33.110530    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:06:35.293515    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:06:35.294361    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:35.294417    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:06:35.312948    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:06:35.313158    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:35.313236    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:06:38.046907    1228 main.go:141] libmachine: [stdout =====>] : 172.25.124.104
	
	I0903 23:06:38.047112    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:38.047623    1228 sshutil.go:53] new ssh client: &{IP:172.25.124.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m03\id_rsa Username:docker}
	I0903 23:06:38.075069    1228 main.go:141] libmachine: [stdout =====>] : 172.25.124.104
	
	I0903 23:06:38.075069    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:38.075963    1228 sshutil.go:53] new ssh client: &{IP:172.25.124.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m03\id_rsa Username:docker}
	I0903 23:06:38.149275    1228 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0386746s)
	W0903 23:06:38.149404    1228 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0903 23:06:38.164748    1228 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0903 23:06:38.171734    1228 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.0771498s)
	W0903 23:06:38.171801    1228 start.go:868] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0903 23:06:38.207707    1228 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0903 23:06:38.207707    1228 start.go:495] detecting cgroup driver to use...
	I0903 23:06:38.208062    1228 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0903 23:06:38.265557    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0903 23:06:38.302455    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0903 23:06:38.331350    1228 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0903 23:06:38.343619    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W0903 23:06:38.359070    1228 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0903 23:06:38.359132    1228 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0903 23:06:38.384465    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0903 23:06:38.423162    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0903 23:06:38.458408    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0903 23:06:38.493305    1228 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0903 23:06:38.531230    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0903 23:06:38.565909    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0903 23:06:38.600043    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0903 23:06:38.634767    1228 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0903 23:06:38.652397    1228 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0903 23:06:38.664265    1228 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0903 23:06:38.699141    1228 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0903 23:06:38.730070    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:06:38.961397    1228 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0903 23:06:39.022475    1228 start.go:495] detecting cgroup driver to use...
	I0903 23:06:39.036942    1228 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0903 23:06:39.077336    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0903 23:06:39.116912    1228 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0903 23:06:39.161254    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0903 23:06:39.199212    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0903 23:06:39.238342    1228 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0903 23:06:39.315341    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0903 23:06:39.344072    1228 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0903 23:06:39.397297    1228 ssh_runner.go:195] Run: which cri-dockerd
	I0903 23:06:39.418001    1228 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0903 23:06:39.440310    1228 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0903 23:06:39.491719    1228 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0903 23:06:39.729269    1228 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0903 23:06:39.947686    1228 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0903 23:06:39.947777    1228 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0903 23:06:40.001627    1228 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0903 23:06:40.038287    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:06:40.274902    1228 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0903 23:06:40.456412    1228 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0903 23:06:40.495410    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0903 23:06:40.533384    1228 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0903 23:06:40.580404    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:06:40.856229    1228 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0903 23:06:41.921612    1228 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.0652355s)
	I0903 23:06:41.921612    1228 retry.go:31] will retry after 728.305379ms: docker not running
	I0903 23:06:42.664531    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0903 23:06:42.703526    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0903 23:06:42.742133    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0903 23:06:42.777734    1228 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0903 23:06:43.015908    1228 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0903 23:06:43.272453    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:06:43.519851    1228 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0903 23:06:43.585328    1228 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0903 23:06:43.621896    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:06:43.859063    1228 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0903 23:06:44.024038    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0903 23:06:44.056214    1228 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0903 23:06:44.069408    1228 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0903 23:06:44.079721    1228 start.go:563] Will wait 60s for crictl version
	I0903 23:06:44.090863    1228 ssh_runner.go:195] Run: which crictl
	I0903 23:06:44.111500    1228 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0903 23:06:44.169262    1228 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.3.2
	RuntimeApiVersion:  v1
	I0903 23:06:44.180309    1228 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0903 23:06:44.225411    1228 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0903 23:06:44.259030    1228 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.3.2 ...
	I0903 23:06:44.263553    1228 out.go:179]   - env NO_PROXY=172.25.116.52
	I0903 23:06:44.267275    1228 out.go:179]   - env NO_PROXY=172.25.116.52,172.25.120.53
	I0903 23:06:44.269240    1228 ip.go:180] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0903 23:06:44.273811    1228 ip.go:194] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0903 23:06:44.273811    1228 ip.go:194] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0903 23:06:44.273811    1228 ip.go:189] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0903 23:06:44.273811    1228 ip.go:215] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:71:2e:33 Flags:up|broadcast|multicast|running}
	I0903 23:06:44.276853    1228 ip.go:218] interface addr: fe80::b536:5e95:cebf:bd87/64
	I0903 23:06:44.276853    1228 ip.go:218] interface addr: 172.25.112.1/20
	I0903 23:06:44.286865    1228 ssh_runner.go:195] Run: grep 172.25.112.1	host.minikube.internal$ /etc/hosts
	I0903 23:06:44.294965    1228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.112.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0903 23:06:44.329518    1228 mustload.go:65] Loading cluster: ha-270000
	I0903 23:06:44.330415    1228 config.go:182] Loaded profile config "ha-270000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0903 23:06:44.330608    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 23:06:46.385558    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:06:46.386143    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:46.386211    1228 host.go:66] Checking if "ha-270000" exists ...
	I0903 23:06:46.387029    1228 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000 for IP: 172.25.124.104
	I0903 23:06:46.387029    1228 certs.go:194] generating shared ca certs ...
	I0903 23:06:46.387103    1228 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:06:46.387843    1228 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0903 23:06:46.388159    1228 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0903 23:06:46.388454    1228 certs.go:256] generating profile certs ...
	I0903 23:06:46.389339    1228 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\client.key
	I0903 23:06:46.389527    1228 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key.df715e79
	I0903 23:06:46.389629    1228 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt.df715e79 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.116.52 172.25.120.53 172.25.124.104 172.25.127.254]
	I0903 23:06:46.513919    1228 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt.df715e79 ...
	I0903 23:06:46.513919    1228 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt.df715e79: {Name:mk94aec58ef12e28df00a53b1ba486364e2a26de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:06:46.514917    1228 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key.df715e79 ...
	I0903 23:06:46.514917    1228 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key.df715e79: {Name:mke5f3cdb87dd957c6b68c229eb55ba6edd3a6bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:06:46.515919    1228 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt.df715e79 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt
	I0903 23:06:46.534781    1228 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key.df715e79 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key
	I0903 23:06:46.535675    1228 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.key
	I0903 23:06:46.535675    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0903 23:06:46.536728    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0903 23:06:46.536728    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0903 23:06:46.536728    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0903 23:06:46.536728    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0903 23:06:46.537374    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0903 23:06:46.537636    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0903 23:06:46.537808    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0903 23:06:46.538008    1228 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220.pem (1338 bytes)
	W0903 23:06:46.538008    1228 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220_empty.pem, impossibly tiny 0 bytes
	I0903 23:06:46.538688    1228 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0903 23:06:46.539248    1228 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0903 23:06:46.539943    1228 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0903 23:06:46.540572    1228 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0903 23:06:46.540805    1228 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem (1708 bytes)
	I0903 23:06:46.541562    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem -> /usr/share/ca-certificates/22202.pem
	I0903 23:06:46.541867    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:06:46.542052    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220.pem -> /usr/share/ca-certificates/2220.pem
	I0903 23:06:46.542253    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 23:06:48.607056    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:06:48.607056    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:48.607118    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 23:06:51.177294    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 23:06:51.177294    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:51.177422    1228 sshutil.go:53] new ssh client: &{IP:172.25.116.52 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000\id_rsa Username:docker}
	I0903 23:06:51.282087    1228 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0903 23:06:51.290105    1228 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0903 23:06:51.333795    1228 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0903 23:06:51.340621    1228 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0903 23:06:51.378055    1228 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0903 23:06:51.386883    1228 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0903 23:06:51.423610    1228 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0903 23:06:51.431448    1228 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0903 23:06:51.471331    1228 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0903 23:06:51.478567    1228 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0903 23:06:51.514108    1228 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0903 23:06:51.524914    1228 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0903 23:06:51.553631    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0903 23:06:51.606429    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0903 23:06:51.667391    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0903 23:06:51.722169    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0903 23:06:51.774885    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0903 23:06:51.833011    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0903 23:06:51.887267    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0903 23:06:51.945618    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0903 23:06:52.001306    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem --> /usr/share/ca-certificates/22202.pem (1708 bytes)
	I0903 23:06:52.070756    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0903 23:06:52.132509    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220.pem --> /usr/share/ca-certificates/2220.pem (1338 bytes)
	I0903 23:06:52.186367    1228 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0903 23:06:52.227505    1228 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0903 23:06:52.265085    1228 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0903 23:06:52.302624    1228 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0903 23:06:52.349532    1228 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0903 23:06:52.412534    1228 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0903 23:06:52.455565    1228 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0903 23:06:52.518835    1228 ssh_runner.go:195] Run: openssl version
	I0903 23:06:52.543426    1228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0903 23:06:52.589057    1228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:06:52.597735    1228 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  3 22:20 /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:06:52.610576    1228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:06:52.643456    1228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0903 23:06:52.688338    1228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2220.pem && ln -fs /usr/share/ca-certificates/2220.pem /etc/ssl/certs/2220.pem"
	I0903 23:06:52.733583    1228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2220.pem
	I0903 23:06:52.741947    1228 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  3 22:37 /usr/share/ca-certificates/2220.pem
	I0903 23:06:52.754006    1228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2220.pem
	I0903 23:06:52.792356    1228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2220.pem /etc/ssl/certs/51391683.0"
	I0903 23:06:52.832776    1228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22202.pem && ln -fs /usr/share/ca-certificates/22202.pem /etc/ssl/certs/22202.pem"
	I0903 23:06:52.869746    1228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22202.pem
	I0903 23:06:52.877760    1228 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  3 22:37 /usr/share/ca-certificates/22202.pem
	I0903 23:06:52.888883    1228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22202.pem
	I0903 23:06:52.922649    1228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22202.pem /etc/ssl/certs/3ec20f2e.0"
	I0903 23:06:52.957158    1228 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0903 23:06:52.964865    1228 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0903 23:06:52.965197    1228 kubeadm.go:926] updating node {m03 172.25.124.104 8443 v1.34.0 docker true true} ...
	I0903 23:06:52.965272    1228 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-270000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.124.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-270000 Namespace:default APIServerHAVIP:172.25.127.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0903 23:06:52.965490    1228 kube-vip.go:115] generating kube-vip config ...
	I0903 23:06:52.977405    1228 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0903 23:06:53.008106    1228 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0903 23:06:53.008254    1228 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.25.127.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0903 23:06:53.022735    1228 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0903 23:06:53.043509    1228 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.0': No such file or directory
	
	Initiating transfer...
	I0903 23:06:53.056539    1228 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.0
	I0903 23:06:53.077863    1228 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
	I0903 23:06:53.077915    1228 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubelet.sha256
	I0903 23:06:53.078032    1228 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubeadm.sha256
	I0903 23:06:53.078032    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubectl -> /var/lib/minikube/binaries/v1.34.0/kubectl
	I0903 23:06:53.078128    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubeadm -> /var/lib/minikube/binaries/v1.34.0/kubeadm
	I0903 23:06:53.092009    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0903 23:06:53.092715    1228 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubectl
	I0903 23:06:53.093732    1228 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubeadm
	I0903 23:06:53.122370    1228 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.0/kubectl': No such file or directory
	I0903 23:06:53.122464    1228 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.0/kubeadm': No such file or directory
	I0903 23:06:53.122464    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubelet -> /var/lib/minikube/binaries/v1.34.0/kubelet
	I0903 23:06:53.122464    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubectl --> /var/lib/minikube/binaries/v1.34.0/kubectl (60559544 bytes)
	I0903 23:06:53.122464    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubeadm --> /var/lib/minikube/binaries/v1.34.0/kubeadm (74027192 bytes)
	I0903 23:06:53.137325    1228 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubelet
	I0903 23:06:53.236721    1228 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.0/kubelet': No such file or directory
	I0903 23:06:53.236721    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubelet --> /var/lib/minikube/binaries/v1.34.0/kubelet (59195684 bytes)
	I0903 23:06:54.426280    1228 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0903 23:06:54.448207    1228 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0903 23:06:54.486880    1228 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0903 23:06:54.537189    1228 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0903 23:06:54.604187    1228 ssh_runner.go:195] Run: grep 172.25.127.254	control-plane.minikube.internal$ /etc/hosts
	I0903 23:06:54.611011    1228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.127.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0903 23:06:54.652270    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:06:54.906121    1228 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0903 23:06:54.961940    1228 host.go:66] Checking if "ha-270000" exists ...
	I0903 23:06:54.962919    1228 start.go:317] joinCluster: &{Name:ha-270000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-270000 Namespace:default APIServerHAVIP:172.25.127.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.116.52 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.120.53 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.25.124.104 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:06:54.962919    1228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0903 23:06:54.962919    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 23:06:57.082253    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:06:57.082253    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:57.082253    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 23:06:59.595292    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 23:06:59.595292    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:59.596933    1228 sshutil.go:53] new ssh client: &{IP:172.25.116.52 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000\id_rsa Username:docker}
	I0903 23:07:00.006485    1228 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0434954s)
	I0903 23:07:00.006605    1228 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.25.124.104 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0903 23:07:00.006741    1228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n21ykb.j8r73csmrpokwkyp --discovery-token-ca-cert-hash sha256:461028e7d31446a9db54ef88db35928fa51812dbcfd2f42c8a70c32665923137 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-270000-m03 --control-plane --apiserver-advertise-address=172.25.124.104 --apiserver-bind-port=8443"
	I0903 23:07:53.523560    1228 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n21ykb.j8r73csmrpokwkyp --discovery-token-ca-cert-hash sha256:461028e7d31446a9db54ef88db35928fa51812dbcfd2f42c8a70c32665923137 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-270000-m03 --control-plane --apiserver-advertise-address=172.25.124.104 --apiserver-bind-port=8443": (53.5160322s)
	I0903 23:07:53.523560    1228 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0903 23:07:54.245547    1228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-270000-m03 minikube.k8s.io/updated_at=2025_09_03T23_07_54_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=b3583632deefb20d71cab8d8ac0a8c3504aed1fb minikube.k8s.io/name=ha-270000 minikube.k8s.io/primary=false
	I0903 23:07:54.413010    1228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-270000-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0903 23:07:54.563477    1228 start.go:319] duration metric: took 59.5997831s to joinCluster
	I0903 23:07:54.563477    1228 start.go:235] Will wait 6m0s for node &{Name:m03 IP:172.25.124.104 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0903 23:07:54.563477    1228 config.go:182] Loaded profile config "ha-270000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0903 23:07:54.567298    1228 out.go:179] * Verifying Kubernetes components...
	I0903 23:07:54.582502    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:07:54.887171    1228 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0903 23:07:54.922973    1228 kapi.go:59] client config for ha-270000: &rest.Config{Host:"https://172.25.127.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-270000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-270000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24e0580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0903 23:07:54.922973    1228 kubeadm.go:483] Overriding stale ClientConfig host https://172.25.127.254:8443 with https://172.25.116.52:8443
	I0903 23:07:54.924241    1228 node_ready.go:35] waiting up to 6m0s for node "ha-270000-m03" to be "Ready" ...
	W0903 23:07:56.967712    1228 node_ready.go:57] node "ha-270000-m03" has "Ready":"False" status (will retry)
	W0903 23:07:59.430841    1228 node_ready.go:57] node "ha-270000-m03" has "Ready":"False" status (will retry)
	W0903 23:08:01.431397    1228 node_ready.go:57] node "ha-270000-m03" has "Ready":"False" status (will retry)
	W0903 23:08:03.431596    1228 node_ready.go:57] node "ha-270000-m03" has "Ready":"False" status (will retry)
	W0903 23:08:05.433719    1228 node_ready.go:57] node "ha-270000-m03" has "Ready":"False" status (will retry)
	W0903 23:08:07.435053    1228 node_ready.go:57] node "ha-270000-m03" has "Ready":"False" status (will retry)
	W0903 23:08:09.930979    1228 node_ready.go:57] node "ha-270000-m03" has "Ready":"False" status (will retry)
	W0903 23:08:11.932344    1228 node_ready.go:57] node "ha-270000-m03" has "Ready":"False" status (will retry)
	W0903 23:08:14.439563    1228 node_ready.go:57] node "ha-270000-m03" has "Ready":"False" status (will retry)
	W0903 23:08:16.931256    1228 node_ready.go:57] node "ha-270000-m03" has "Ready":"False" status (will retry)
	W0903 23:08:19.431758    1228 node_ready.go:57] node "ha-270000-m03" has "Ready":"False" status (will retry)
	W0903 23:08:21.931336    1228 node_ready.go:57] node "ha-270000-m03" has "Ready":"False" status (will retry)
	W0903 23:08:23.931845    1228 node_ready.go:57] node "ha-270000-m03" has "Ready":"False" status (will retry)
	I0903 23:08:24.432339    1228 node_ready.go:49] node "ha-270000-m03" is "Ready"
	I0903 23:08:24.432339    1228 node_ready.go:38] duration metric: took 29.5076881s for node "ha-270000-m03" to be "Ready" ...
	I0903 23:08:24.432437    1228 api_server.go:52] waiting for apiserver process to appear ...
	I0903 23:08:24.444295    1228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:08:24.482653    1228 api_server.go:72] duration metric: took 29.9187608s to wait for apiserver process to appear ...
	I0903 23:08:24.482704    1228 api_server.go:88] waiting for apiserver healthz status ...
	I0903 23:08:24.482770    1228 api_server.go:253] Checking apiserver healthz at https://172.25.116.52:8443/healthz ...
	I0903 23:08:24.492894    1228 api_server.go:279] https://172.25.116.52:8443/healthz returned 200:
	ok
	I0903 23:08:24.496626    1228 api_server.go:141] control plane version: v1.34.0
	I0903 23:08:24.496626    1228 api_server.go:131] duration metric: took 13.9216ms to wait for apiserver health ...
	I0903 23:08:24.496626    1228 system_pods.go:43] waiting for kube-system pods to appear ...
	I0903 23:08:24.508610    1228 system_pods.go:59] 24 kube-system pods found
	I0903 23:08:24.508681    1228 system_pods.go:61] "coredns-66bc5c9577-58qw9" [e4c3bec4-9c47-404e-98ff-21e0aee82931] Running
	I0903 23:08:24.508681    1228 system_pods.go:61] "coredns-66bc5c9577-cnk8d" [20226b19-1d13-4057-88c1-709997f24868] Running
	I0903 23:08:24.508681    1228 system_pods.go:61] "etcd-ha-270000" [bedaa6e6-7109-475b-b96e-34178b2a83e2] Running
	I0903 23:08:24.508681    1228 system_pods.go:61] "etcd-ha-270000-m02" [d123ed06-ba3b-4745-a419-0b7720e9e903] Running
	I0903 23:08:24.508681    1228 system_pods.go:61] "etcd-ha-270000-m03" [5684b0cc-afb5-415c-9a8d-452523531995] Running
	I0903 23:08:24.508681    1228 system_pods.go:61] "kindnet-96trb" [32ea1443-99f0-4e56-99cb-d1ce43dbcb2f] Running
	I0903 23:08:24.508681    1228 system_pods.go:61] "kindnet-vsgwr" [aa24d517-8c6d-4625-bd97-6f7fe1f7f72e] Running
	I0903 23:08:24.508752    1228 system_pods.go:61] "kindnet-wqmlt" [230736de-aaf5-4c9c-9af9-6a4bcc572547] Running
	I0903 23:08:24.508752    1228 system_pods.go:61] "kube-apiserver-ha-270000" [8b258bec-c81d-404f-b217-dccd40799d89] Running
	I0903 23:08:24.508752    1228 system_pods.go:61] "kube-apiserver-ha-270000-m02" [16ba52a6-4dfc-487f-9bc9-65d94e1fffd8] Running
	I0903 23:08:24.508752    1228 system_pods.go:61] "kube-apiserver-ha-270000-m03" [30239ff2-f7a0-4a91-920c-058ee37aee79] Running
	I0903 23:08:24.508752    1228 system_pods.go:61] "kube-controller-manager-ha-270000" [a695c6ed-2e2f-41ea-a250-9b01b1ae90af] Running
	I0903 23:08:24.508752    1228 system_pods.go:61] "kube-controller-manager-ha-270000-m02" [f39fb141-4af3-4207-8f1c-1ce77b760861] Running
	I0903 23:08:24.508752    1228 system_pods.go:61] "kube-controller-manager-ha-270000-m03" [c18582aa-1ead-4403-a412-1cc46100151b] Running
	I0903 23:08:24.508752    1228 system_pods.go:61] "kube-proxy-cb8z2" [1b8a13fe-f029-42c2-9241-18cc0213dce2] Running
	I0903 23:08:24.508752    1228 system_pods.go:61] "kube-proxy-qkts6" [8e651463-997a-4431-a14c-29557282565f] Running
	I0903 23:08:24.508862    1228 system_pods.go:61] "kube-proxy-t96st" [f609fa93-da46-46a5-ba36-84c291da86a5] Running
	I0903 23:08:24.508914    1228 system_pods.go:61] "kube-scheduler-ha-270000" [a257c6a6-4337-49fd-ba96-c6248221f207] Running
	I0903 23:08:24.508914    1228 system_pods.go:61] "kube-scheduler-ha-270000-m02" [5c49ee66-b613-4b3c-9539-da558d1dd53a] Running
	I0903 23:08:24.508914    1228 system_pods.go:61] "kube-scheduler-ha-270000-m03" [061cecf5-9818-4f99-b6d2-603759814139] Running
	I0903 23:08:24.508914    1228 system_pods.go:61] "kube-vip-ha-270000" [4a489bea-b3e7-43bd-96e0-58c1480000a4] Running
	I0903 23:08:24.508914    1228 system_pods.go:61] "kube-vip-ha-270000-m02" [163cfde8-7488-49ac-b241-2509a7b01d1b] Running
	I0903 23:08:24.508914    1228 system_pods.go:61] "kube-vip-ha-270000-m03" [66b497a0-35c9-470b-b263-bb25c762b83e] Running
	I0903 23:08:24.508914    1228 system_pods.go:61] "storage-provisioner" [7643327e-078c-45c9-9a32-cdf3b7a72986] Running
	I0903 23:08:24.508988    1228 system_pods.go:74] duration metric: took 12.3617ms to wait for pod list to return data ...
	I0903 23:08:24.508988    1228 default_sa.go:34] waiting for default service account to be created ...
	I0903 23:08:24.515504    1228 default_sa.go:45] found service account: "default"
	I0903 23:08:24.515504    1228 default_sa.go:55] duration metric: took 6.5162ms for default service account to be created ...
	I0903 23:08:24.515504    1228 system_pods.go:116] waiting for k8s-apps to be running ...
	I0903 23:08:24.541213    1228 system_pods.go:86] 24 kube-system pods found
	I0903 23:08:24.541281    1228 system_pods.go:89] "coredns-66bc5c9577-58qw9" [e4c3bec4-9c47-404e-98ff-21e0aee82931] Running
	I0903 23:08:24.541281    1228 system_pods.go:89] "coredns-66bc5c9577-cnk8d" [20226b19-1d13-4057-88c1-709997f24868] Running
	I0903 23:08:24.541281    1228 system_pods.go:89] "etcd-ha-270000" [bedaa6e6-7109-475b-b96e-34178b2a83e2] Running
	I0903 23:08:24.541366    1228 system_pods.go:89] "etcd-ha-270000-m02" [d123ed06-ba3b-4745-a419-0b7720e9e903] Running
	I0903 23:08:24.541366    1228 system_pods.go:89] "etcd-ha-270000-m03" [5684b0cc-afb5-415c-9a8d-452523531995] Running
	I0903 23:08:24.541366    1228 system_pods.go:89] "kindnet-96trb" [32ea1443-99f0-4e56-99cb-d1ce43dbcb2f] Running
	I0903 23:08:24.541366    1228 system_pods.go:89] "kindnet-vsgwr" [aa24d517-8c6d-4625-bd97-6f7fe1f7f72e] Running
	I0903 23:08:24.541366    1228 system_pods.go:89] "kindnet-wqmlt" [230736de-aaf5-4c9c-9af9-6a4bcc572547] Running
	I0903 23:08:24.541366    1228 system_pods.go:89] "kube-apiserver-ha-270000" [8b258bec-c81d-404f-b217-dccd40799d89] Running
	I0903 23:08:24.541366    1228 system_pods.go:89] "kube-apiserver-ha-270000-m02" [16ba52a6-4dfc-487f-9bc9-65d94e1fffd8] Running
	I0903 23:08:24.541366    1228 system_pods.go:89] "kube-apiserver-ha-270000-m03" [30239ff2-f7a0-4a91-920c-058ee37aee79] Running
	I0903 23:08:24.541366    1228 system_pods.go:89] "kube-controller-manager-ha-270000" [a695c6ed-2e2f-41ea-a250-9b01b1ae90af] Running
	I0903 23:08:24.541450    1228 system_pods.go:89] "kube-controller-manager-ha-270000-m02" [f39fb141-4af3-4207-8f1c-1ce77b760861] Running
	I0903 23:08:24.541481    1228 system_pods.go:89] "kube-controller-manager-ha-270000-m03" [c18582aa-1ead-4403-a412-1cc46100151b] Running
	I0903 23:08:24.541481    1228 system_pods.go:89] "kube-proxy-cb8z2" [1b8a13fe-f029-42c2-9241-18cc0213dce2] Running
	I0903 23:08:24.541507    1228 system_pods.go:89] "kube-proxy-qkts6" [8e651463-997a-4431-a14c-29557282565f] Running
	I0903 23:08:24.541507    1228 system_pods.go:89] "kube-proxy-t96st" [f609fa93-da46-46a5-ba36-84c291da86a5] Running
	I0903 23:08:24.541507    1228 system_pods.go:89] "kube-scheduler-ha-270000" [a257c6a6-4337-49fd-ba96-c6248221f207] Running
	I0903 23:08:24.541507    1228 system_pods.go:89] "kube-scheduler-ha-270000-m02" [5c49ee66-b613-4b3c-9539-da558d1dd53a] Running
	I0903 23:08:24.541507    1228 system_pods.go:89] "kube-scheduler-ha-270000-m03" [061cecf5-9818-4f99-b6d2-603759814139] Running
	I0903 23:08:24.541507    1228 system_pods.go:89] "kube-vip-ha-270000" [4a489bea-b3e7-43bd-96e0-58c1480000a4] Running
	I0903 23:08:24.541507    1228 system_pods.go:89] "kube-vip-ha-270000-m02" [163cfde8-7488-49ac-b241-2509a7b01d1b] Running
	I0903 23:08:24.541507    1228 system_pods.go:89] "kube-vip-ha-270000-m03" [66b497a0-35c9-470b-b263-bb25c762b83e] Running
	I0903 23:08:24.541507    1228 system_pods.go:89] "storage-provisioner" [7643327e-078c-45c9-9a32-cdf3b7a72986] Running
	I0903 23:08:24.541507    1228 system_pods.go:126] duration metric: took 26.003ms to wait for k8s-apps to be running ...
	I0903 23:08:24.541507    1228 system_svc.go:44] waiting for kubelet service to be running ....
	I0903 23:08:24.552989    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0903 23:08:24.587108    1228 system_svc.go:56] duration metric: took 45.5997ms WaitForService to wait for kubelet
	I0903 23:08:24.587108    1228 kubeadm.go:578] duration metric: took 30.0232135s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0903 23:08:24.587297    1228 node_conditions.go:102] verifying NodePressure condition ...
	I0903 23:08:24.595438    1228 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0903 23:08:24.595438    1228 node_conditions.go:123] node cpu capacity is 2
	I0903 23:08:24.595438    1228 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0903 23:08:24.595438    1228 node_conditions.go:123] node cpu capacity is 2
	I0903 23:08:24.595438    1228 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0903 23:08:24.595438    1228 node_conditions.go:123] node cpu capacity is 2
	I0903 23:08:24.595438    1228 node_conditions.go:105] duration metric: took 8.1409ms to run NodePressure ...
	I0903 23:08:24.595438    1228 start.go:241] waiting for startup goroutines ...
	I0903 23:08:24.596015    1228 start.go:255] writing updated cluster config ...
	I0903 23:08:24.609564    1228 ssh_runner.go:195] Run: rm -f paused
	I0903 23:08:24.617784    1228 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0903 23:08:24.619208    1228 kapi.go:59] client config for ha-270000: &rest.Config{Host:"https://172.25.127.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-270000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-270000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24e0580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0903 23:08:24.641304    1228 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-58qw9" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:24.652199    1228 pod_ready.go:94] pod "coredns-66bc5c9577-58qw9" is "Ready"
	I0903 23:08:24.652199    1228 pod_ready.go:86] duration metric: took 10.8957ms for pod "coredns-66bc5c9577-58qw9" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:24.652199    1228 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-cnk8d" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:24.662823    1228 pod_ready.go:94] pod "coredns-66bc5c9577-cnk8d" is "Ready"
	I0903 23:08:24.662892    1228 pod_ready.go:86] duration metric: took 10.6233ms for pod "coredns-66bc5c9577-cnk8d" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:24.670153    1228 pod_ready.go:83] waiting for pod "etcd-ha-270000" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:24.680168    1228 pod_ready.go:94] pod "etcd-ha-270000" is "Ready"
	I0903 23:08:24.680168    1228 pod_ready.go:86] duration metric: took 10.015ms for pod "etcd-ha-270000" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:24.680168    1228 pod_ready.go:83] waiting for pod "etcd-ha-270000-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:24.688832    1228 pod_ready.go:94] pod "etcd-ha-270000-m02" is "Ready"
	I0903 23:08:24.688832    1228 pod_ready.go:86] duration metric: took 8.6637ms for pod "etcd-ha-270000-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:24.688832    1228 pod_ready.go:83] waiting for pod "etcd-ha-270000-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:24.821448    1228 request.go:683] "Waited before sending request" delay="132.6148ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-270000-m03"
	I0903 23:08:25.021242    1228 request.go:683] "Waited before sending request" delay="193.6485ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/nodes/ha-270000-m03"
	I0903 23:08:25.031181    1228 pod_ready.go:94] pod "etcd-ha-270000-m03" is "Ready"
	I0903 23:08:25.031240    1228 pod_ready.go:86] duration metric: took 342.4039ms for pod "etcd-ha-270000-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:25.221160    1228 request.go:683] "Waited before sending request" delay="189.8598ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0903 23:08:25.229439    1228 pod_ready.go:83] waiting for pod "kube-apiserver-ha-270000" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:25.420782    1228 request.go:683] "Waited before sending request" delay="191.2094ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-270000"
	I0903 23:08:25.620766    1228 request.go:683] "Waited before sending request" delay="193.9519ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/nodes/ha-270000"
	I0903 23:08:25.627063    1228 pod_ready.go:94] pod "kube-apiserver-ha-270000" is "Ready"
	I0903 23:08:25.627063    1228 pod_ready.go:86] duration metric: took 397.5425ms for pod "kube-apiserver-ha-270000" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:25.627676    1228 pod_ready.go:83] waiting for pod "kube-apiserver-ha-270000-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:25.820766    1228 request.go:683] "Waited before sending request" delay="192.6908ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-270000-m02"
	I0903 23:08:26.021536    1228 request.go:683] "Waited before sending request" delay="189.2531ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/nodes/ha-270000-m02"
	I0903 23:08:26.027132    1228 pod_ready.go:94] pod "kube-apiserver-ha-270000-m02" is "Ready"
	I0903 23:08:26.027132    1228 pod_ready.go:86] duration metric: took 399.1179ms for pod "kube-apiserver-ha-270000-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:26.027132    1228 pod_ready.go:83] waiting for pod "kube-apiserver-ha-270000-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:26.221246    1228 request.go:683] "Waited before sending request" delay="194.1117ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-270000-m03"
	I0903 23:08:26.420827    1228 request.go:683] "Waited before sending request" delay="192.3747ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/nodes/ha-270000-m03"
	I0903 23:08:26.427893    1228 pod_ready.go:94] pod "kube-apiserver-ha-270000-m03" is "Ready"
	I0903 23:08:26.427946    1228 pod_ready.go:86] duration metric: took 400.8091ms for pod "kube-apiserver-ha-270000-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:26.621410    1228 request.go:683] "Waited before sending request" delay="193.3496ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I0903 23:08:26.632244    1228 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-270000" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:26.820890    1228 request.go:683] "Waited before sending request" delay="188.5357ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-270000"
	I0903 23:08:27.021312    1228 request.go:683] "Waited before sending request" delay="193.81ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/nodes/ha-270000"
	I0903 23:08:27.027646    1228 pod_ready.go:94] pod "kube-controller-manager-ha-270000" is "Ready"
	I0903 23:08:27.027646    1228 pod_ready.go:86] duration metric: took 395.3431ms for pod "kube-controller-manager-ha-270000" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:27.027646    1228 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-270000-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:27.221499    1228 request.go:683] "Waited before sending request" delay="193.8499ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-270000-m02"
	I0903 23:08:27.421447    1228 request.go:683] "Waited before sending request" delay="193.692ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/nodes/ha-270000-m02"
	I0903 23:08:27.426969    1228 pod_ready.go:94] pod "kube-controller-manager-ha-270000-m02" is "Ready"
	I0903 23:08:27.427059    1228 pod_ready.go:86] duration metric: took 399.4068ms for pod "kube-controller-manager-ha-270000-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:27.427059    1228 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-270000-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:27.621139    1228 request.go:683] "Waited before sending request" delay="194.0775ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-270000-m03"
	I0903 23:08:27.821591    1228 request.go:683] "Waited before sending request" delay="192.9871ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/nodes/ha-270000-m03"
	I0903 23:08:27.828155    1228 pod_ready.go:94] pod "kube-controller-manager-ha-270000-m03" is "Ready"
	I0903 23:08:27.828155    1228 pod_ready.go:86] duration metric: took 401.0908ms for pod "kube-controller-manager-ha-270000-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:28.020724    1228 request.go:683] "Waited before sending request" delay="191.839ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0903 23:08:28.029278    1228 pod_ready.go:83] waiting for pod "kube-proxy-cb8z2" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:28.221184    1228 request.go:683] "Waited before sending request" delay="191.9039ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cb8z2"
	I0903 23:08:28.420703    1228 request.go:683] "Waited before sending request" delay="193.5347ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/nodes/ha-270000-m03"
	I0903 23:08:28.426633    1228 pod_ready.go:94] pod "kube-proxy-cb8z2" is "Ready"
	I0903 23:08:28.427169    1228 pod_ready.go:86] duration metric: took 397.8862ms for pod "kube-proxy-cb8z2" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:28.427169    1228 pod_ready.go:83] waiting for pod "kube-proxy-qkts6" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:28.620674    1228 request.go:683] "Waited before sending request" delay="193.3011ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qkts6"
	I0903 23:08:28.821157    1228 request.go:683] "Waited before sending request" delay="194.5718ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/nodes/ha-270000-m02"
	I0903 23:08:28.828052    1228 pod_ready.go:94] pod "kube-proxy-qkts6" is "Ready"
	I0903 23:08:28.828052    1228 pod_ready.go:86] duration metric: took 400.8773ms for pod "kube-proxy-qkts6" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:28.828052    1228 pod_ready.go:83] waiting for pod "kube-proxy-t96st" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:29.021450    1228 request.go:683] "Waited before sending request" delay="193.1719ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t96st"
	I0903 23:08:29.222419    1228 request.go:683] "Waited before sending request" delay="193.3362ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/nodes/ha-270000"
	I0903 23:08:29.227960    1228 pod_ready.go:94] pod "kube-proxy-t96st" is "Ready"
	I0903 23:08:29.227960    1228 pod_ready.go:86] duration metric: took 399.9026ms for pod "kube-proxy-t96st" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:29.421259    1228 request.go:683] "Waited before sending request" delay="193.1318ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I0903 23:08:29.430996    1228 pod_ready.go:83] waiting for pod "kube-scheduler-ha-270000" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:29.621651    1228 request.go:683] "Waited before sending request" delay="190.5024ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-270000"
	I0903 23:08:29.821140    1228 request.go:683] "Waited before sending request" delay="194.3559ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/nodes/ha-270000"
	I0903 23:08:29.827173    1228 pod_ready.go:94] pod "kube-scheduler-ha-270000" is "Ready"
	I0903 23:08:29.827225    1228 pod_ready.go:86] duration metric: took 396.1718ms for pod "kube-scheduler-ha-270000" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:29.827225    1228 pod_ready.go:83] waiting for pod "kube-scheduler-ha-270000-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:30.021745    1228 request.go:683] "Waited before sending request" delay="194.5167ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-270000-m02"
	I0903 23:08:30.220851    1228 request.go:683] "Waited before sending request" delay="191.5353ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/nodes/ha-270000-m02"
	I0903 23:08:30.226963    1228 pod_ready.go:94] pod "kube-scheduler-ha-270000-m02" is "Ready"
	I0903 23:08:30.226963    1228 pod_ready.go:86] duration metric: took 399.7316ms for pod "kube-scheduler-ha-270000-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:30.226963    1228 pod_ready.go:83] waiting for pod "kube-scheduler-ha-270000-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:30.420761    1228 request.go:683] "Waited before sending request" delay="193.6898ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-270000-m03"
	I0903 23:08:30.621262    1228 request.go:683] "Waited before sending request" delay="194.3443ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/nodes/ha-270000-m03"
	I0903 23:08:30.631859    1228 pod_ready.go:94] pod "kube-scheduler-ha-270000-m03" is "Ready"
	I0903 23:08:30.631951    1228 pod_ready.go:86] duration metric: took 404.8762ms for pod "kube-scheduler-ha-270000-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:30.631951    1228 pod_ready.go:40] duration metric: took 6.0140833s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0903 23:08:30.774051    1228 start.go:617] kubectl: 1.34.0, cluster: 1.34.0 (minor skew: 0)
	I0903 23:08:30.778556    1228 out.go:179] * Done! kubectl is now configured to use "ha-270000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 03 22:59:23 ha-270000 dockerd[2057]: time="2025-09-03T22:59:23.340853786Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint_count b815c8341521d784335e2ba21604b2414c9e730e154cf870398d1b8c474f33aa], retrying...."
	Sep 03 22:59:23 ha-270000 dockerd[2057]: time="2025-09-03T22:59:23.438738576Z" level=info msg="Loading containers: done."
	Sep 03 22:59:23 ha-270000 dockerd[2057]: time="2025-09-03T22:59:23.460709775Z" level=info msg="Docker daemon" commit=e77ff99 containerd-snapshotter=false storage-driver=overlay2 version=28.3.2
	Sep 03 22:59:23 ha-270000 dockerd[2057]: time="2025-09-03T22:59:23.460777676Z" level=info msg="Initializing buildkit"
	Sep 03 22:59:23 ha-270000 dockerd[2057]: time="2025-09-03T22:59:23.495790694Z" level=info msg="Completed buildkit initialization"
	Sep 03 22:59:23 ha-270000 dockerd[2057]: time="2025-09-03T22:59:23.511119834Z" level=info msg="Daemon has completed initialization"
	Sep 03 22:59:23 ha-270000 dockerd[2057]: time="2025-09-03T22:59:23.511168434Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 03 22:59:23 ha-270000 dockerd[2057]: time="2025-09-03T22:59:23.511361936Z" level=info msg="API listen on /run/docker.sock"
	Sep 03 22:59:23 ha-270000 dockerd[2057]: time="2025-09-03T22:59:23.511425737Z" level=info msg="API listen on [::]:2376"
	Sep 03 22:59:23 ha-270000 systemd[1]: Started Docker Application Container Engine.
	Sep 03 22:59:34 ha-270000 cri-dockerd[1921]: time="2025-09-03T22:59:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/90329fcf36cd0f912716cea1751c86422190bed362ff1c040970598366a259c2/resolv.conf as [nameserver 172.25.112.1]"
	Sep 03 22:59:34 ha-270000 cri-dockerd[1921]: time="2025-09-03T22:59:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9a3da56bb72c938aa1f38c595aee13d2464f856c5e46cdf558aec6d1a862db23/resolv.conf as [nameserver 172.25.112.1]"
	Sep 03 22:59:34 ha-270000 cri-dockerd[1921]: time="2025-09-03T22:59:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6bf910dadd391c2be6ded43b28e91e0547975fb132fdd33e1d7c9b17b2d84a3b/resolv.conf as [nameserver 172.25.112.1]"
	Sep 03 22:59:34 ha-270000 cri-dockerd[1921]: time="2025-09-03T22:59:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a4dd9e65d7d273637dc3367e6beeea47b9e1c094a4cc81fff90d528c28feba04/resolv.conf as [nameserver 172.25.112.1]"
	Sep 03 22:59:34 ha-270000 cri-dockerd[1921]: time="2025-09-03T22:59:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0b16264b4dd567bea1f101a9a6fdd98d72e0fe7e4e47a9de8397547ea6cc3912/resolv.conf as [nameserver 172.25.112.1]"
	Sep 03 22:59:37 ha-270000 cri-dockerd[1921]: time="2025-09-03T22:59:37Z" level=info msg="Stop pulling image ghcr.io/kube-vip/kube-vip:v1.0.0: Status: Downloaded newer image for ghcr.io/kube-vip/kube-vip:v1.0.0"
	Sep 03 22:59:48 ha-270000 cri-dockerd[1921]: time="2025-09-03T22:59:48Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 03 22:59:50 ha-270000 cri-dockerd[1921]: time="2025-09-03T22:59:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/513be39f65dea0bdfdf72a9db2617cb17253abdd890e152086c2e07560f9850b/resolv.conf as [nameserver 172.25.112.1]"
	Sep 03 22:59:50 ha-270000 cri-dockerd[1921]: time="2025-09-03T22:59:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/14155ffe05146e4c150dfdd56e7ccbd470fdd08c24940763f0ce633cc7d9ca72/resolv.conf as [nameserver 172.25.112.1]"
	Sep 03 22:59:57 ha-270000 cri-dockerd[1921]: time="2025-09-03T22:59:57Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20250512-df8de77b: Status: Downloaded newer image for kindest/kindnetd:v20250512-df8de77b"
	Sep 03 23:00:12 ha-270000 cri-dockerd[1921]: time="2025-09-03T23:00:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c1a846b8a99e7701086644b9e4b501865d72ddc25ed73eb3c13ec9c4c8f0a426/resolv.conf as [nameserver 172.25.112.1]"
	Sep 03 23:00:12 ha-270000 cri-dockerd[1921]: time="2025-09-03T23:00:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b5ab48725316b35b7cbfbe34ed1b7ef8ff490e2c9aab4bc6046ac062d6cd592c/resolv.conf as [nameserver 172.25.112.1]"
	Sep 03 23:00:12 ha-270000 cri-dockerd[1921]: time="2025-09-03T23:00:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5667c542f41f834bccd4227fef98bf0c3102aa8be800e12cbca9ed319d69cd70/resolv.conf as [nameserver 172.25.112.1]"
	Sep 03 23:09:09 ha-270000 cri-dockerd[1921]: time="2025-09-03T23:09:09Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f54f411a35779176c9319b737fbe697ae2872af4162be6251aa352a81a0471d0/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 03 23:09:11 ha-270000 cri-dockerd[1921]: time="2025-09-03T23:09:11Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	2b2d73adb2f15       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   f54f411a35779       busybox-7b57f96db7-lxhhz
	4ea445ef36026       52546a367cc9e                                                                                         10 minutes ago       Running             coredns                   0                   5667c542f41f8       coredns-66bc5c9577-cnk8d
	39d49eaefc29e       52546a367cc9e                                                                                         10 minutes ago       Running             coredns                   0                   c1a846b8a99e7       coredns-66bc5c9577-58qw9
	afc6e3d43fb6c       6e38f40d628db                                                                                         10 minutes ago       Running             storage-provisioner       0                   b5ab48725316b       storage-provisioner
	1aed5b11fdcd8       kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a              10 minutes ago       Running             kindnet-cni               0                   14155ffe05146       kindnet-96trb
	faad83036df83       df0860106674d                                                                                         10 minutes ago       Running             kube-proxy                0                   513be39f65dea       kube-proxy-t96st
	9b02f8b78eee0       ghcr.io/kube-vip/kube-vip@sha256:4f256554a83a6d824ea9c5307450a2c3fd132e09c52b339326f94fefaf67155c     10 minutes ago       Running             kube-vip                  0                   90329fcf36cd0       kube-vip-ha-270000
	5227167cf7b2c       46169d968e920                                                                                         10 minutes ago       Running             kube-scheduler            0                   0b16264b4dd56       kube-scheduler-ha-270000
	9f44f2bbeacca       5f1f5298c888d                                                                                         10 minutes ago       Running             etcd                      0                   a4dd9e65d7d27       etcd-ha-270000
	7f593816c5b60       a0af72f2ec6d6                                                                                         10 minutes ago       Running             kube-controller-manager   0                   6bf910dadd391       kube-controller-manager-ha-270000
	33fa1cad16779       90550c43ad2bc                                                                                         10 minutes ago       Running             kube-apiserver            0                   9a3da56bb72c9       kube-apiserver-ha-270000
	
	
	==> coredns [39d49eaefc29] <==
	[INFO] 10.244.2.2:54702 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000196002s
	[INFO] 10.244.2.2:35992 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000194403s
	[INFO] 10.244.1.2:50567 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154002s
	[INFO] 10.244.1.2:54999 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000085201s
	[INFO] 10.244.1.2:53722 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174003s
	[INFO] 10.244.1.2:36245 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112502s
	[INFO] 10.244.0.4:56323 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000659108s
	[INFO] 10.244.0.4:50146 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.016604612s
	[INFO] 10.244.0.4:43817 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000303604s
	[INFO] 10.244.0.4:46846 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000126002s
	[INFO] 10.244.0.4:44316 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000307304s
	[INFO] 10.244.0.4:52546 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000210102s
	[INFO] 10.244.0.4:36032 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000147302s
	[INFO] 10.244.2.2:34527 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000252504s
	[INFO] 10.244.1.2:47369 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169502s
	[INFO] 10.244.1.2:60919 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000339705s
	[INFO] 10.244.1.2:52619 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000166602s
	[INFO] 10.244.1.2:57454 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064101s
	[INFO] 10.244.0.4:34556 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000241403s
	[INFO] 10.244.2.2:33501 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000297104s
	[INFO] 10.244.2.2:49833 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000100801s
	[INFO] 10.244.2.2:45636 - 5 "PTR IN 1.112.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000209903s
	[INFO] 10.244.1.2:53110 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000322804s
	[INFO] 10.244.1.2:40341 - 5 "PTR IN 1.112.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000184102s
	[INFO] 10.244.0.4:47011 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000080701s
	
	
	==> coredns [4ea445ef3602] <==
	[INFO] 10.244.2.2:57620 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd 124 0.181580921s
	[INFO] 10.244.1.2:33948 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.003741948s
	[INFO] 10.244.1.2:33516 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 89 0.001293416s
	[INFO] 10.244.0.4:39698 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd 126 0.000207603s
	[INFO] 10.244.0.4:53955 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 124 0.000087401s
	[INFO] 10.244.2.2:53515 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.032336012s
	[INFO] 10.244.2.2:49443 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205803s
	[INFO] 10.244.2.2:43376 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000159102s
	[INFO] 10.244.1.2:42079 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000429105s
	[INFO] 10.244.1.2:33994 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000074101s
	[INFO] 10.244.1.2:53427 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000296504s
	[INFO] 10.244.1.2:58071 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000289403s
	[INFO] 10.244.0.4:41062 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000192502s
	[INFO] 10.244.2.2:60168 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000249103s
	[INFO] 10.244.2.2:55216 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000240103s
	[INFO] 10.244.2.2:43311 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080101s
	[INFO] 10.244.0.4:39601 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000326604s
	[INFO] 10.244.0.4:50681 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000139702s
	[INFO] 10.244.0.4:41448 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121302s
	[INFO] 10.244.2.2:44725 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000117502s
	[INFO] 10.244.1.2:45944 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118701s
	[INFO] 10.244.1.2:44094 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000950212s
	[INFO] 10.244.0.4:46361 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000252203s
	[INFO] 10.244.0.4:48916 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000094101s
	[INFO] 10.244.0.4:45915 - 5 "PTR IN 1.112.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000122901s
	
	
	==> describe nodes <==
	Name:               ha-270000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-270000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b3583632deefb20d71cab8d8ac0a8c3504aed1fb
	                    minikube.k8s.io/name=ha-270000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_03T22_59_45_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Sep 2025 22:59:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-270000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Sep 2025 23:10:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Sep 2025 23:09:24 +0000   Wed, 03 Sep 2025 22:59:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Sep 2025 23:09:24 +0000   Wed, 03 Sep 2025 22:59:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Sep 2025 23:09:24 +0000   Wed, 03 Sep 2025 22:59:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Sep 2025 23:09:24 +0000   Wed, 03 Sep 2025 23:00:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.116.52
	  Hostname:    ha-270000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac512743579d4a1595cd8eeb12593efb
	  System UUID:                19a5aee7-0b11-eb4e-892b-911233248f7e
	  Boot ID:                    5dec2aa3-6ec6-413a-8333-c7300633f796
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.3.2
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-lxhhz             0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 coredns-66bc5c9577-58qw9             100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     10m
	  kube-system                 coredns-66bc5c9577-cnk8d             100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     10m
	  kube-system                 etcd-ha-270000                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         10m
	  kube-system                 kindnet-96trb                        100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      10m
	  kube-system                 kube-apiserver-ha-270000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-270000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-t96st                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-270000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-270000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (9%)  390Mi (13%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-270000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-270000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-270000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                kubelet          Node ha-270000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                kubelet          Node ha-270000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                kubelet          Node ha-270000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node ha-270000 event: Registered Node ha-270000 in Controller
	  Normal  NodeReady                10m                kubelet          Node ha-270000 status is now: NodeReady
	  Normal  RegisteredNode           6m33s              node-controller  Node ha-270000 event: Registered Node ha-270000 in Controller
	  Normal  RegisteredNode           2m27s              node-controller  Node ha-270000 event: Registered Node ha-270000 in Controller
	
	
	Name:               ha-270000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-270000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b3583632deefb20d71cab8d8ac0a8c3504aed1fb
	                    minikube.k8s.io/name=ha-270000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_03T23_03_48_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Sep 2025 23:03:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-270000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Sep 2025 23:10:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Sep 2025 23:09:34 +0000   Wed, 03 Sep 2025 23:03:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Sep 2025 23:09:34 +0000   Wed, 03 Sep 2025 23:03:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Sep 2025 23:09:34 +0000   Wed, 03 Sep 2025 23:03:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Sep 2025 23:09:34 +0000   Wed, 03 Sep 2025 23:04:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.120.53
	  Hostname:    ha-270000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	System Info:
	  Machine ID:                 024b94a46799424c84c7081b9a292387
	  System UUID:                31707a6e-1c2d-984a-a6d3-0674b15d2706
	  Boot ID:                    31d26a98-3b58-44f3-a168-e2d96656d476
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.3.2
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-c6z29                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 etcd-ha-270000-m02                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         6m21s
	  kube-system                 kindnet-vsgwr                            100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      6m26s
	  kube-system                 kube-apiserver-ha-270000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-controller-manager-ha-270000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-proxy-qkts6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-scheduler-ha-270000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-vip-ha-270000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (5%)  50Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        6m19s  kube-proxy       
	  Normal  RegisteredNode  6m25s  node-controller  Node ha-270000-m02 event: Registered Node ha-270000-m02 in Controller
	  Normal  RegisteredNode  6m23s  node-controller  Node ha-270000-m02 event: Registered Node ha-270000-m02 in Controller
	  Normal  RegisteredNode  2m27s  node-controller  Node ha-270000-m02 event: Registered Node ha-270000-m02 in Controller
	
	
	Name:               ha-270000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-270000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b3583632deefb20d71cab8d8ac0a8c3504aed1fb
	                    minikube.k8s.io/name=ha-270000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_03T23_07_54_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Sep 2025 23:07:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-270000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Sep 2025 23:10:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Sep 2025 23:09:45 +0000   Wed, 03 Sep 2025 23:07:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Sep 2025 23:09:45 +0000   Wed, 03 Sep 2025 23:07:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Sep 2025 23:09:45 +0000   Wed, 03 Sep 2025 23:07:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Sep 2025 23:09:45 +0000   Wed, 03 Sep 2025 23:08:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.124.104
	  Hostname:    ha-270000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	System Info:
	  Machine ID:                 0c41329947b84b6ba0c0c88ad46c0ca9
	  System UUID:                a8bd4d02-c4f0-2149-98e7-f240fc6aa90c
	  Boot ID:                    e76f92a8-1f48-48bb-8d14-b86184e2d0d1
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.3.2
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-5cfq2                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 etcd-ha-270000-m03                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         2m13s
	  kube-system                 kindnet-wqmlt                            100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      2m20s
	  kube-system                 kube-apiserver-ha-270000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-controller-manager-ha-270000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-proxy-cb8z2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-scheduler-ha-270000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-vip-ha-270000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (5%)  50Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        2m11s  kube-proxy       
	  Normal  RegisteredNode  2m20s  node-controller  Node ha-270000-m03 event: Registered Node ha-270000-m03 in Controller
	  Normal  RegisteredNode  2m18s  node-controller  Node ha-270000-m03 event: Registered Node ha-270000-m03 in Controller
	  Normal  RegisteredNode  2m17s  node-controller  Node ha-270000-m03 event: Registered Node ha-270000-m03 in Controller
	
	
	==> dmesg <==
	[Sep 3 22:57] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000000] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +0.002271] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.000009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.002272] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	              * this clock source is slow. Consider trying other clock sources
	[  +0.665501] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +0.000056] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002869] (rpcbind)[114]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.622383] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep 3 22:59] kauditd_printk_skb: 96 callbacks suppressed
	[  +0.187594] kauditd_printk_skb: 396 callbacks suppressed
	[  +0.185708] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.149384] kauditd_printk_skb: 193 callbacks suppressed
	[  +6.035020] kauditd_printk_skb: 174 callbacks suppressed
	[  +0.209097] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.886303] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.389456] kauditd_printk_skb: 107 callbacks suppressed
	[Sep 3 23:00] kauditd_printk_skb: 17 callbacks suppressed
	[Sep 3 23:03] kauditd_printk_skb: 92 callbacks suppressed
	[Sep 3 23:09] hrtimer: interrupt took 1369018 ns
	
	
	==> etcd [9f44f2bbeacc] <==
	{"level":"warn","ts":"2025-09-03T23:07:42.018064Z","caller":"etcdserver/raft.go:387","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"a11b07e662660865","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"7.002013ms"}
	{"level":"warn","ts":"2025-09-03T23:07:42.018197Z","caller":"etcdserver/raft.go:387","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"5ec5d9f85793fb82","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"7.120715ms"}
	{"level":"info","ts":"2025-09-03T23:07:53.420671Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"warn","ts":"2025-09-03T23:07:53.493329Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.668202ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-dvhh8\" limit:1 ","response":"range_response_count:1 size:4073"}
	{"level":"info","ts":"2025-09-03T23:07:53.493531Z","caller":"traceutil/trace.go:172","msg":"trace[2025999834] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-dvhh8; range_end:; response_count:1; response_revision:1524; }","duration":"115.877205ms","start":"2025-09-03T23:07:53.377643Z","end":"2025-09-03T23:07:53.493520Z","steps":["trace[2025999834] 'agreement among raft nodes before linearized reading'  (duration: 115.5568ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-03T23:07:53.542128Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.802707ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-03T23:07:53.542273Z","caller":"traceutil/trace.go:172","msg":"trace[903285297] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1531; }","duration":"108.99791ms","start":"2025-09-03T23:07:53.433258Z","end":"2025-09-03T23:07:53.542256Z","steps":["trace[903285297] 'agreement among raft nodes before linearized reading'  (duration: 108.781307ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-03T23:08:00.964777Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-03T23:08:07.190268Z","caller":"traceutil/trace.go:172","msg":"trace[1492582292] transaction","detail":"{read_only:false; response_revision:1630; number_of_response:1; }","duration":"201.118439ms","start":"2025-09-03T23:08:06.988692Z","end":"2025-09-03T23:08:07.189810Z","steps":["trace[1492582292] 'process raft request'  (duration: 200.988137ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-03T23:08:09.121804Z","caller":"etcdserver/raft.go:387","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"a11b07e662660865","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"2.012146ms"}
	{"level":"warn","ts":"2025-09-03T23:08:09.121897Z","caller":"etcdserver/raft.go:387","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"5ec5d9f85793fb82","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"2.108947ms"}
	{"level":"info","ts":"2025-09-03T23:08:09.124041Z","caller":"traceutil/trace.go:172","msg":"trace[1114748292] linearizableReadLoop","detail":"{readStateIndex:1873; appliedIndex:1874; }","duration":"167.06707ms","start":"2025-09-03T23:08:08.956944Z","end":"2025-09-03T23:08:09.124011Z","steps":["trace[1114748292] 'read index received'  (duration: 167.06257ms)","trace[1114748292] 'applied index is now lower than readState.Index'  (duration: 3.8µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-03T23:08:09.124344Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"167.382574ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-270000-m03\" limit:1 ","response":"range_response_count:1 size:4282"}
	{"level":"info","ts":"2025-09-03T23:08:09.124465Z","caller":"traceutil/trace.go:172","msg":"trace[184073259] range","detail":"{range_begin:/registry/minions/ha-270000-m03; range_end:; response_count:1; response_revision:1633; }","duration":"167.516776ms","start":"2025-09-03T23:08:08.956939Z","end":"2025-09-03T23:08:09.124456Z","steps":["trace[184073259] 'agreement among raft nodes before linearized reading'  (duration: 167.249172ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-03T23:08:10.070887Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"234c625b0f0adc4e","to":"a11b07e662660865","bytes":2343458,"size":"2.3 MB","took":"30.295724366s"}
	{"level":"info","ts":"2025-09-03T23:09:08.196121Z","caller":"traceutil/trace.go:172","msg":"trace[605789265] transaction","detail":"{read_only:false; response_revision:1798; number_of_response:1; }","duration":"114.696274ms","start":"2025-09-03T23:09:08.081408Z","end":"2025-09-03T23:09:08.196105Z","steps":["trace[605789265] 'process raft request'  (duration: 113.099353ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-03T23:09:08.208994Z","caller":"traceutil/trace.go:172","msg":"trace[576266189] transaction","detail":"{read_only:false; response_revision:1799; number_of_response:1; }","duration":"125.645514ms","start":"2025-09-03T23:09:08.083335Z","end":"2025-09-03T23:09:08.208981Z","steps":["trace[576266189] 'process raft request'  (duration: 112.761149ms)","trace[576266189] 'compare'  (duration: 11.127842ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-03T23:09:08.392686Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.364218ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/default/busybox-7b57f96db7\" limit:1 ","response":"range_response_count:1 size:1973"}
	{"level":"info","ts":"2025-09-03T23:09:08.392893Z","caller":"traceutil/trace.go:172","msg":"trace[1741393691] range","detail":"{range_begin:/registry/replicasets/default/busybox-7b57f96db7; range_end:; response_count:1; response_revision:1814; }","duration":"110.581321ms","start":"2025-09-03T23:09:08.282300Z","end":"2025-09-03T23:09:08.392881Z","steps":["trace[1741393691] 'agreement among raft nodes before linearized reading'  (duration: 110.231716ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-03T23:09:08.427407Z","caller":"traceutil/trace.go:172","msg":"trace[1523419343] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1815; }","duration":"101.998711ms","start":"2025-09-03T23:09:08.323187Z","end":"2025-09-03T23:09:08.425185Z","steps":["trace[1523419343] 'process raft request'  (duration: 101.882009ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-03T23:09:08.532658Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.727407ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/busybox\" limit:1 ","response":"range_response_count:1 size:3069"}
	{"level":"info","ts":"2025-09-03T23:09:08.532736Z","caller":"traceutil/trace.go:172","msg":"trace[686887923] range","detail":"{range_begin:/registry/deployments/default/busybox; range_end:; response_count:1; response_revision:1826; }","duration":"103.315228ms","start":"2025-09-03T23:09:08.429410Z","end":"2025-09-03T23:09:08.532725Z","steps":["trace[686887923] 'agreement among raft nodes before linearized reading'  (duration: 95.312225ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-03T23:09:37.942651Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1082}
	{"level":"info","ts":"2025-09-03T23:09:38.011432Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1082,"took":"68.051557ms","hash":2439328660,"current-db-size-bytes":3756032,"current-db-size":"3.8 MB","current-db-size-in-use-bytes":2207744,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2025-09-03T23:09:38.011545Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2439328660,"revision":1082,"compact-revision":-1}
	
	
	==> kernel <==
	 23:10:13 up 12 min,  0 users,  load average: 0.59, 0.52, 0.31
	Linux ha-270000 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Jul 26 03:21:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kindnet [1aed5b11fdcd] <==
	I0903 23:09:29.222954       1 main.go:324] Node ha-270000-m03 has CIDR [10.244.2.0/24] 
	I0903 23:09:39.218511       1 main.go:297] Handling node with IPs: map[172.25.116.52:{}]
	I0903 23:09:39.218683       1 main.go:301] handling current node
	I0903 23:09:39.218704       1 main.go:297] Handling node with IPs: map[172.25.120.53:{}]
	I0903 23:09:39.218718       1 main.go:324] Node ha-270000-m02 has CIDR [10.244.1.0/24] 
	I0903 23:09:39.219213       1 main.go:297] Handling node with IPs: map[172.25.124.104:{}]
	I0903 23:09:39.219230       1 main.go:324] Node ha-270000-m03 has CIDR [10.244.2.0/24] 
	I0903 23:09:49.218517       1 main.go:297] Handling node with IPs: map[172.25.116.52:{}]
	I0903 23:09:49.218643       1 main.go:301] handling current node
	I0903 23:09:49.218660       1 main.go:297] Handling node with IPs: map[172.25.120.53:{}]
	I0903 23:09:49.218667       1 main.go:324] Node ha-270000-m02 has CIDR [10.244.1.0/24] 
	I0903 23:09:49.219007       1 main.go:297] Handling node with IPs: map[172.25.124.104:{}]
	I0903 23:09:49.219042       1 main.go:324] Node ha-270000-m03 has CIDR [10.244.2.0/24] 
	I0903 23:09:59.222761       1 main.go:297] Handling node with IPs: map[172.25.116.52:{}]
	I0903 23:09:59.222799       1 main.go:301] handling current node
	I0903 23:09:59.222817       1 main.go:297] Handling node with IPs: map[172.25.120.53:{}]
	I0903 23:09:59.222823       1 main.go:324] Node ha-270000-m02 has CIDR [10.244.1.0/24] 
	I0903 23:09:59.223564       1 main.go:297] Handling node with IPs: map[172.25.124.104:{}]
	I0903 23:09:59.223646       1 main.go:324] Node ha-270000-m03 has CIDR [10.244.2.0/24] 
	I0903 23:10:09.218817       1 main.go:297] Handling node with IPs: map[172.25.116.52:{}]
	I0903 23:10:09.218943       1 main.go:301] handling current node
	I0903 23:10:09.218964       1 main.go:297] Handling node with IPs: map[172.25.120.53:{}]
	I0903 23:10:09.218972       1 main.go:324] Node ha-270000-m02 has CIDR [10.244.1.0/24] 
	I0903 23:10:09.219295       1 main.go:297] Handling node with IPs: map[172.25.124.104:{}]
	I0903 23:10:09.219379       1 main.go:324] Node ha-270000-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [33fa1cad1677] <==
	I0903 23:04:47.964438       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0903 23:05:25.375246       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0903 23:05:58.004727       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0903 23:06:40.405454       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0903 23:07:25.513902       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0903 23:08:06.502015       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0903 23:08:52.625205       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0903 23:09:15.499869       1 conn.go:339] Error on socket receive: read tcp 172.25.127.254:8443->172.25.112.1:60053: use of closed network connection
	E0903 23:09:16.016558       1 conn.go:339] Error on socket receive: read tcp 172.25.127.254:8443->172.25.112.1:60055: use of closed network connection
	E0903 23:09:16.562411       1 conn.go:339] Error on socket receive: read tcp 172.25.127.254:8443->172.25.112.1:60057: use of closed network connection
	E0903 23:09:17.142007       1 conn.go:339] Error on socket receive: read tcp 172.25.127.254:8443->172.25.112.1:60059: use of closed network connection
	E0903 23:09:17.692460       1 conn.go:339] Error on socket receive: read tcp 172.25.127.254:8443->172.25.112.1:60061: use of closed network connection
	E0903 23:09:18.226061       1 conn.go:339] Error on socket receive: read tcp 172.25.127.254:8443->172.25.112.1:60063: use of closed network connection
	E0903 23:09:18.752271       1 conn.go:339] Error on socket receive: read tcp 172.25.127.254:8443->172.25.112.1:60065: use of closed network connection
	E0903 23:09:19.255482       1 conn.go:339] Error on socket receive: read tcp 172.25.127.254:8443->172.25.112.1:60067: use of closed network connection
	E0903 23:09:19.766123       1 conn.go:339] Error on socket receive: read tcp 172.25.127.254:8443->172.25.112.1:60069: use of closed network connection
	E0903 23:09:20.733777       1 conn.go:339] Error on socket receive: read tcp 172.25.127.254:8443->172.25.112.1:60072: use of closed network connection
	I0903 23:09:21.642799       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0903 23:09:31.241925       1 conn.go:339] Error on socket receive: read tcp 172.25.127.254:8443->172.25.112.1:60074: use of closed network connection
	E0903 23:09:31.756977       1 conn.go:339] Error on socket receive: read tcp 172.25.127.254:8443->172.25.112.1:60078: use of closed network connection
	I0903 23:09:40.272883       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E0903 23:09:42.252494       1 conn.go:339] Error on socket receive: read tcp 172.25.127.254:8443->172.25.112.1:60080: use of closed network connection
	E0903 23:09:42.763780       1 conn.go:339] Error on socket receive: read tcp 172.25.127.254:8443->172.25.112.1:60083: use of closed network connection
	E0903 23:09:53.306527       1 conn.go:339] Error on socket receive: read tcp 172.25.127.254:8443->172.25.112.1:60085: use of closed network connection
	I0903 23:10:08.163988       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [7f593816c5b6] <==
	I0903 22:59:48.315750       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0903 22:59:48.315789       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0903 22:59:48.315794       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0903 22:59:48.315924       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0903 22:59:48.322052       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0903 22:59:48.322217       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0903 22:59:48.338405       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0903 22:59:48.347114       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0903 22:59:48.347957       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0903 22:59:48.348231       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0903 22:59:48.350413       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0903 22:59:48.350547       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-270000" podCIDRs=["10.244.0.0/24"]
	I0903 22:59:48.350648       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0903 22:59:48.350759       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0903 22:59:48.353465       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0903 22:59:48.354706       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0903 22:59:48.357072       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0903 22:59:48.361992       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0903 23:00:13.305555       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0903 23:03:47.037302       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-270000-m02\" does not exist"
	I0903 23:03:47.108103       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-270000-m02" podCIDRs=["10.244.1.0/24"]
	I0903 23:03:48.353030       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-270000-m02"
	I0903 23:07:53.159425       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-270000-m03\" does not exist"
	I0903 23:07:53.231028       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-270000-m03" podCIDRs=["10.244.2.0/24"]
	I0903 23:07:53.425938       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-270000-m03"
	
	
	==> kube-proxy [faad83036df8] <==
	I0903 22:59:50.779113       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0903 22:59:50.880419       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0903 22:59:50.880456       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["172.25.116.52"]
	E0903 22:59:50.880565       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0903 22:59:50.945516       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0903 22:59:50.945819       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0903 22:59:50.945901       1 server_linux.go:132] "Using iptables Proxier"
	I0903 22:59:50.968216       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0903 22:59:50.968801       1 server.go:527] "Version info" version="v1.34.0"
	I0903 22:59:50.968824       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0903 22:59:50.971984       1 config.go:200] "Starting service config controller"
	I0903 22:59:50.972002       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0903 22:59:50.972020       1 config.go:106] "Starting endpoint slice config controller"
	I0903 22:59:50.972026       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0903 22:59:50.972040       1 config.go:403] "Starting serviceCIDR config controller"
	I0903 22:59:50.972046       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0903 22:59:50.972948       1 config.go:309] "Starting node config controller"
	I0903 22:59:50.972959       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0903 22:59:50.972966       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0903 22:59:51.072869       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0903 22:59:51.073049       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0903 22:59:51.073138       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5227167cf7b2] <==
	E0903 22:59:40.323705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0903 22:59:41.153882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0903 22:59:41.173416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0903 22:59:41.376404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0903 22:59:41.385951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0903 22:59:41.414190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0903 22:59:41.448426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0903 22:59:41.450071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0903 22:59:41.503378       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0903 22:59:41.540777       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0903 22:59:41.543644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0903 22:59:41.544942       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0903 22:59:41.577465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0903 22:59:41.613671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0903 22:59:41.626322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0903 22:59:41.639109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0903 22:59:41.741517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0903 22:59:41.759315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0903 22:59:41.782523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0903 22:59:41.850382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I0903 22:59:43.605124       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0903 23:07:53.694484       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-dlw8c\": pod kube-proxy-dlw8c is already assigned to node \"ha-270000-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-dlw8c" node="ha-270000-m03"
	E0903 23:07:53.694640       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 4d595fbf-2bee-4651-a4fa-7ce87d747f6d(kube-system/kube-proxy-dlw8c) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-dlw8c"
	E0903 23:07:53.694680       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-dlw8c\": pod kube-proxy-dlw8c is already assigned to node \"ha-270000-m03\"" logger="UnhandledError" pod="kube-system/kube-proxy-dlw8c"
	I0903 23:07:53.696086       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-dlw8c" node="ha-270000-m03"
	
	
	==> kubelet <==
	Sep 03 22:59:48 ha-270000 kubelet[3184]: I0903 22:59:48.433551    3184 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 03 22:59:48 ha-270000 kubelet[3184]: I0903 22:59:48.434480    3184 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 03 22:59:49 ha-270000 kubelet[3184]: I0903 22:59:49.257639    3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngqkm\" (UniqueName: \"kubernetes.io/projected/f609fa93-da46-46a5-ba36-84c291da86a5-kube-api-access-ngqkm\") pod \"kube-proxy-t96st\" (UID: \"f609fa93-da46-46a5-ba36-84c291da86a5\") " pod="kube-system/kube-proxy-t96st"
	Sep 03 22:59:49 ha-270000 kubelet[3184]: I0903 22:59:49.258332    3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f609fa93-da46-46a5-ba36-84c291da86a5-kube-proxy\") pod \"kube-proxy-t96st\" (UID: \"f609fa93-da46-46a5-ba36-84c291da86a5\") " pod="kube-system/kube-proxy-t96st"
	Sep 03 22:59:49 ha-270000 kubelet[3184]: I0903 22:59:49.259426    3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f609fa93-da46-46a5-ba36-84c291da86a5-xtables-lock\") pod \"kube-proxy-t96st\" (UID: \"f609fa93-da46-46a5-ba36-84c291da86a5\") " pod="kube-system/kube-proxy-t96st"
	Sep 03 22:59:49 ha-270000 kubelet[3184]: I0903 22:59:49.259518    3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f609fa93-da46-46a5-ba36-84c291da86a5-lib-modules\") pod \"kube-proxy-t96st\" (UID: \"f609fa93-da46-46a5-ba36-84c291da86a5\") " pod="kube-system/kube-proxy-t96st"
	Sep 03 22:59:49 ha-270000 kubelet[3184]: I0903 22:59:49.360270    3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/32ea1443-99f0-4e56-99cb-d1ce43dbcb2f-xtables-lock\") pod \"kindnet-96trb\" (UID: \"32ea1443-99f0-4e56-99cb-d1ce43dbcb2f\") " pod="kube-system/kindnet-96trb"
	Sep 03 22:59:49 ha-270000 kubelet[3184]: I0903 22:59:49.360999    3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/32ea1443-99f0-4e56-99cb-d1ce43dbcb2f-cni-cfg\") pod \"kindnet-96trb\" (UID: \"32ea1443-99f0-4e56-99cb-d1ce43dbcb2f\") " pod="kube-system/kindnet-96trb"
	Sep 03 22:59:49 ha-270000 kubelet[3184]: I0903 22:59:49.361173    3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqpzw\" (UniqueName: \"kubernetes.io/projected/32ea1443-99f0-4e56-99cb-d1ce43dbcb2f-kube-api-access-gqpzw\") pod \"kindnet-96trb\" (UID: \"32ea1443-99f0-4e56-99cb-d1ce43dbcb2f\") " pod="kube-system/kindnet-96trb"
	Sep 03 22:59:49 ha-270000 kubelet[3184]: I0903 22:59:49.361967    3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/32ea1443-99f0-4e56-99cb-d1ce43dbcb2f-lib-modules\") pod \"kindnet-96trb\" (UID: \"32ea1443-99f0-4e56-99cb-d1ce43dbcb2f\") " pod="kube-system/kindnet-96trb"
	Sep 03 22:59:50 ha-270000 kubelet[3184]: I0903 22:59:50.617131    3184 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14155ffe05146e4c150dfdd56e7ccbd470fdd08c24940763f0ce633cc7d9ca72"
	Sep 03 22:59:53 ha-270000 kubelet[3184]: I0903 22:59:53.290148    3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t96st" podStartSLOduration=4.290132028 podStartE2EDuration="4.290132028s" podCreationTimestamp="2025-09-03 22:59:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 22:59:51.732225535 +0000 UTC m=+7.582181776" watchObservedRunningTime="2025-09-03 22:59:53.290132028 +0000 UTC m=+9.140088169"
	Sep 03 23:00:11 ha-270000 kubelet[3184]: I0903 23:00:11.406228    3184 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Sep 03 23:00:11 ha-270000 kubelet[3184]: I0903 23:00:11.478149    3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-96trb" podStartSLOduration=15.921778306 podStartE2EDuration="22.478116072s" podCreationTimestamp="2025-09-03 22:59:49 +0000 UTC" firstStartedPulling="2025-09-03 22:59:50.621148194 +0000 UTC m=+6.471104335" lastFinishedPulling="2025-09-03 22:59:57.17748596 +0000 UTC m=+13.027442101" observedRunningTime="2025-09-03 22:59:58.801321448 +0000 UTC m=+14.651277689" watchObservedRunningTime="2025-09-03 23:00:11.478116072 +0000 UTC m=+27.328072313"
	Sep 03 23:00:11 ha-270000 kubelet[3184]: I0903 23:00:11.612413    3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4c3bec4-9c47-404e-98ff-21e0aee82931-config-volume\") pod \"coredns-66bc5c9577-58qw9\" (UID: \"e4c3bec4-9c47-404e-98ff-21e0aee82931\") " pod="kube-system/coredns-66bc5c9577-58qw9"
	Sep 03 23:00:11 ha-270000 kubelet[3184]: I0903 23:00:11.612826    3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff84p\" (UniqueName: \"kubernetes.io/projected/e4c3bec4-9c47-404e-98ff-21e0aee82931-kube-api-access-ff84p\") pod \"coredns-66bc5c9577-58qw9\" (UID: \"e4c3bec4-9c47-404e-98ff-21e0aee82931\") " pod="kube-system/coredns-66bc5c9577-58qw9"
	Sep 03 23:00:11 ha-270000 kubelet[3184]: I0903 23:00:11.612896    3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7643327e-078c-45c9-9a32-cdf3b7a72986-tmp\") pod \"storage-provisioner\" (UID: \"7643327e-078c-45c9-9a32-cdf3b7a72986\") " pod="kube-system/storage-provisioner"
	Sep 03 23:00:11 ha-270000 kubelet[3184]: I0903 23:00:11.612935    3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mtx9\" (UniqueName: \"kubernetes.io/projected/7643327e-078c-45c9-9a32-cdf3b7a72986-kube-api-access-2mtx9\") pod \"storage-provisioner\" (UID: \"7643327e-078c-45c9-9a32-cdf3b7a72986\") " pod="kube-system/storage-provisioner"
	Sep 03 23:00:11 ha-270000 kubelet[3184]: I0903 23:00:11.713810    3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20226b19-1d13-4057-88c1-709997f24868-config-volume\") pod \"coredns-66bc5c9577-cnk8d\" (UID: \"20226b19-1d13-4057-88c1-709997f24868\") " pod="kube-system/coredns-66bc5c9577-cnk8d"
	Sep 03 23:00:11 ha-270000 kubelet[3184]: I0903 23:00:11.714105    3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrjk6\" (UniqueName: \"kubernetes.io/projected/20226b19-1d13-4057-88c1-709997f24868-kube-api-access-lrjk6\") pod \"coredns-66bc5c9577-cnk8d\" (UID: \"20226b19-1d13-4057-88c1-709997f24868\") " pod="kube-system/coredns-66bc5c9577-cnk8d"
	Sep 03 23:00:14 ha-270000 kubelet[3184]: I0903 23:00:14.274197    3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-cnk8d" podStartSLOduration=25.274173958 podStartE2EDuration="25.274173958s" podCreationTimestamp="2025-09-03 22:59:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:00:14.230204239 +0000 UTC m=+30.080160480" watchObservedRunningTime="2025-09-03 23:00:14.274173958 +0000 UTC m=+30.124130099"
	Sep 03 23:00:14 ha-270000 kubelet[3184]: I0903 23:00:14.316561    3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.316546059 podStartE2EDuration="16.316546059s" podCreationTimestamp="2025-09-03 22:59:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:00:14.315644548 +0000 UTC m=+30.165600789" watchObservedRunningTime="2025-09-03 23:00:14.316546059 +0000 UTC m=+30.166502200"
	Sep 03 23:00:14 ha-270000 kubelet[3184]: I0903 23:00:14.385082    3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-58qw9" podStartSLOduration=25.385064668 podStartE2EDuration="25.385064668s" podCreationTimestamp="2025-09-03 22:59:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:00:14.382147734 +0000 UTC m=+30.232103975" watchObservedRunningTime="2025-09-03 23:00:14.385064668 +0000 UTC m=+30.235020909"
	Sep 03 23:09:08 ha-270000 kubelet[3184]: I0903 23:09:08.377355    3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbtbn\" (UniqueName: \"kubernetes.io/projected/04bf5fc3-6c7c-4a98-b313-5409650649e3-kube-api-access-gbtbn\") pod \"busybox-7b57f96db7-lxhhz\" (UID: \"04bf5fc3-6c7c-4a98-b313-5409650649e3\") " pod="default/busybox-7b57f96db7-lxhhz"
	Sep 03 23:09:13 ha-270000 kubelet[3184]: I0903 23:09:13.347333    3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7b57f96db7-lxhhz" podStartSLOduration=3.108610599 podStartE2EDuration="5.347316014s" podCreationTimestamp="2025-09-03 23:09:08 +0000 UTC" firstStartedPulling="2025-09-03 23:09:09.659404437 +0000 UTC m=+565.509360578" lastFinishedPulling="2025-09-03 23:09:11.898109752 +0000 UTC m=+567.748065993" observedRunningTime="2025-09-03 23:09:13.346856608 +0000 UTC m=+569.196812849" watchObservedRunningTime="2025-09-03 23:09:13.347316014 +0000 UTC m=+569.197272255"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-270000 -n ha-270000
helpers_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-270000 -n ha-270000: (12.4039944s)
helpers_test.go:269: (dbg) Run:  kubectl --context ha-270000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (68.31s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (50.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-270000 node stop m02 --alsologtostderr -v 5: exit status 1 (15.3708155s)

                                                
                                                
-- stdout --
	* Stopping node "ha-270000-m02"  ...
	* Powering off "ha-270000-m02" via SSH ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0903 23:26:25.214118   12580 out.go:360] Setting OutFile to fd 1740 ...
	I0903 23:26:25.325728   12580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:26:25.325728   12580 out.go:374] Setting ErrFile to fd 1344...
	I0903 23:26:25.325728   12580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:26:25.342126   12580 mustload.go:65] Loading cluster: ha-270000
	I0903 23:26:25.343126   12580 config.go:182] Loaded profile config "ha-270000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0903 23:26:25.343126   12580 stop.go:39] StopHost: ha-270000-m02
	I0903 23:26:25.349132   12580 out.go:179] * Stopping node "ha-270000-m02"  ...
	I0903 23:26:25.352135   12580 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0903 23:26:25.364513   12580 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0903 23:26:25.364513   12580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:26:27.606441   12580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:26:27.606441   12580 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:26:27.606441   12580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:26:30.189626   12580 main.go:141] libmachine: [stdout =====>] : 172.25.120.53
	
	I0903 23:26:30.189626   12580 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:26:30.190340   12580 sshutil.go:53] new ssh client: &{IP:172.25.120.53 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m02\id_rsa Username:docker}
	I0903 23:26:30.317598   12580 ssh_runner.go:235] Completed: sudo mkdir -p /var/lib/minikube/backup: (4.9530153s)
	I0903 23:26:30.331982   12580 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0903 23:26:30.420305   12580 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0903 23:26:30.493482   12580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:26:32.659691   12580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:26:32.659691   12580 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:26:32.663579   12580 out.go:179] * Powering off "ha-270000-m02" via SSH ...
	I0903 23:26:32.672448   12580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:26:34.873293   12580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:26:34.873559   12580 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:26:34.873746   12580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:26:37.494777   12580 main.go:141] libmachine: [stdout =====>] : 172.25.120.53
	
	I0903 23:26:37.494777   12580 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:26:37.498234   12580 main.go:141] libmachine: Using SSH client type: native
	I0903 23:26:37.498234   12580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.120.53 22 <nil> <nil>}
	I0903 23:26:37.498234   12580 main.go:141] libmachine: About to run SSH command:
	sudo poweroff
	I0903 23:26:37.692207   12580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0903 23:26:37.692267   12580 stop.go:100] poweroff result: out=, err=<nil>
	I0903 23:26:37.692267   12580 main.go:141] libmachine: Stopping "ha-270000-m02"...
	I0903 23:26:37.692494   12580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-windows-amd64.exe -p ha-270000 node stop m02 --alsologtostderr -v 5": exit status 1
ha_test.go:371: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-270000 status --alsologtostderr -v 5: context deadline exceeded (56µs)
ha_test.go:374: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-270000 status --alsologtostderr -v 5" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-270000 -n ha-270000
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-270000 -n ha-270000: (12.3133062s)
helpers_test.go:252: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 logs -n 25: (8.8686495s)
helpers_test.go:260: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │  PROFILE  │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-270000 cp ha-270000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2830829437\001\cp-test_ha-270000-m03.txt │ ha-270000 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:21 UTC │ 03 Sep 25 23:22 UTC │
	│ ssh     │ ha-270000 ssh -n ha-270000-m03 sudo cat /home/docker/cp-test.txt                                                                                                              │ ha-270000 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:22 UTC │ 03 Sep 25 23:22 UTC │
	│ cp      │ ha-270000 cp ha-270000-m03:/home/docker/cp-test.txt ha-270000:/home/docker/cp-test_ha-270000-m03_ha-270000.txt                                                                │ ha-270000 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:22 UTC │ 03 Sep 25 23:22 UTC │
	│ ssh     │ ha-270000 ssh -n ha-270000-m03 sudo cat /home/docker/cp-test.txt                                                                                                              │ ha-270000 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:22 UTC │ 03 Sep 25 23:22 UTC │
	│ ssh     │ ha-270000 ssh -n ha-270000 sudo cat /home/docker/cp-test_ha-270000-m03_ha-270000.txt                                                                                          │ ha-270000 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:22 UTC │ 03 Sep 25 23:22 UTC │
	│ cp      │ ha-270000 cp ha-270000-m03:/home/docker/cp-test.txt ha-270000-m02:/home/docker/cp-test_ha-270000-m03_ha-270000-m02.txt                                                        │ ha-270000 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:22 UTC │ 03 Sep 25 23:23 UTC │
	│ ssh     │ ha-270000 ssh -n ha-270000-m03 sudo cat /home/docker/cp-test.txt                                                                                                              │ ha-270000 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:23 UTC │ 03 Sep 25 23:23 UTC │
	│ ssh     │ ha-270000 ssh -n ha-270000-m02 sudo cat /home/docker/cp-test_ha-270000-m03_ha-270000-m02.txt                                                                                  │ ha-270000 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:23 UTC │ 03 Sep 25 23:23 UTC │
	│ cp      │ ha-270000 cp ha-270000-m03:/home/docker/cp-test.txt ha-270000-m04:/home/docker/cp-test_ha-270000-m03_ha-270000-m04.txt                                                        │ ha-270000 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:23 UTC │ 03 Sep 25 23:23 UTC │
	│ ssh     │ ha-270000 ssh -n ha-270000-m03 sudo cat /home/docker/cp-test.txt                                                                                                              │ ha-270000 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:23 UTC │ 03 Sep 25 23:23 UTC │
	│ ssh     │ ha-270000 ssh -n ha-270000-m04 sudo cat /home/docker/cp-test_ha-270000-m03_ha-270000-m04.txt                                                                                  │ ha-270000 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:23 UTC │ 03 Sep 25 23:24 UTC │
	│ cp      │ ha-270000 cp testdata\cp-test.txt ha-270000-m04:/home/docker/cp-test.txt                                                                                                      │ ha-270000 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:24 UTC │ 03 Sep 25 23:24 UTC │
	│ ssh     │ ha-270000 ssh -n ha-270000-m04 sudo cat /home/docker/cp-test.txt                                                                                                              │ ha-270000 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:24 UTC │ 03 Sep 25 23:24 UTC │
	│ cp      │ ha-270000 cp ha-270000-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2830829437\001\cp-test_ha-270000-m04.txt │ ha-270000 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:24 UTC │ 03 Sep 25 23:24 UTC │
	│ ssh     │ ha-270000 ssh -n ha-270000-m04 sudo cat /home/docker/cp-test.txt                                                                                                              │ ha-270000 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:24 UTC │ 03 Sep 25 23:24 UTC │
	│ cp      │ ha-270000 cp ha-270000-m04:/home/docker/cp-test.txt ha-270000:/home/docker/cp-test_ha-270000-m04_ha-270000.txt                                                                │ ha-270000 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:24 UTC │ 03 Sep 25 23:24 UTC │
	│ ssh     │ ha-270000 ssh -n ha-270000-m04 sudo cat /home/docker/cp-test.txt                                                                                                              │ ha-270000 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:24 UTC │ 03 Sep 25 23:25 UTC │
	│ ssh     │ ha-270000 ssh -n ha-270000 sudo cat /home/docker/cp-test_ha-270000-m04_ha-270000.txt                                                                                          │ ha-270000 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:25 UTC │ 03 Sep 25 23:25 UTC │
	│ cp      │ ha-270000 cp ha-270000-m04:/home/docker/cp-test.txt ha-270000-m02:/home/docker/cp-test_ha-270000-m04_ha-270000-m02.txt                                                        │ ha-270000 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:25 UTC │ 03 Sep 25 23:25 UTC │
	│ ssh     │ ha-270000 ssh -n ha-270000-m04 sudo cat /home/docker/cp-test.txt                                                                                                              │ ha-270000 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:25 UTC │ 03 Sep 25 23:25 UTC │
	│ ssh     │ ha-270000 ssh -n ha-270000-m02 sudo cat /home/docker/cp-test_ha-270000-m04_ha-270000-m02.txt                                                                                  │ ha-270000 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:25 UTC │ 03 Sep 25 23:25 UTC │
	│ cp      │ ha-270000 cp ha-270000-m04:/home/docker/cp-test.txt ha-270000-m03:/home/docker/cp-test_ha-270000-m04_ha-270000-m03.txt                                                        │ ha-270000 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:25 UTC │ 03 Sep 25 23:26 UTC │
	│ ssh     │ ha-270000 ssh -n ha-270000-m04 sudo cat /home/docker/cp-test.txt                                                                                                              │ ha-270000 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:26 UTC │ 03 Sep 25 23:26 UTC │
	│ ssh     │ ha-270000 ssh -n ha-270000-m03 sudo cat /home/docker/cp-test_ha-270000-m04_ha-270000-m03.txt                                                                                  │ ha-270000 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:26 UTC │ 03 Sep 25 23:26 UTC │
	│ node    │ ha-270000 node stop m02 --alsologtostderr -v 5                                                                                                                                │ ha-270000 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:26 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/03 22:56:40
	Running on machine: minikube6
	Binary: Built with gc go1.24.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0903 22:56:40.560554    1228 out.go:360] Setting OutFile to fd 1384 ...
	I0903 22:56:40.633073    1228 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 22:56:40.633073    1228 out.go:374] Setting ErrFile to fd 1116...
	I0903 22:56:40.633073    1228 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 22:56:40.652287    1228 out.go:368] Setting JSON to false
	I0903 22:56:40.655755    1228 start.go:130] hostinfo: {"hostname":"minikube6","uptime":23305,"bootTime":1756916894,"procs":177,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6282 Build 19045.6282","kernelVersion":"10.0.19045.6282 Build 19045.6282","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0903 22:56:40.655847    1228 start.go:138] gopshost.Virtualization returned error: not implemented yet
	I0903 22:56:40.659982    1228 out.go:179] * [ha-270000] minikube v1.36.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6282 Build 19045.6282
	I0903 22:56:40.668009    1228 notify.go:220] Checking for updates...
	I0903 22:56:40.670114    1228 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0903 22:56:40.673786    1228 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0903 22:56:40.676878    1228 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0903 22:56:40.680635    1228 out.go:179]   - MINIKUBE_LOCATION=21341
	I0903 22:56:40.683147    1228 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0903 22:56:40.689512    1228 driver.go:421] Setting default libvirt URI to qemu:///system
	I0903 22:56:45.855224    1228 out.go:179] * Using the hyperv driver based on user configuration
	I0903 22:56:45.859601    1228 start.go:304] selected driver: hyperv
	I0903 22:56:45.859601    1228 start.go:918] validating driver "hyperv" against <nil>
	I0903 22:56:45.859601    1228 start.go:929] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0903 22:56:45.906781    1228 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0903 22:56:45.908743    1228 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0903 22:56:45.908743    1228 cni.go:84] Creating CNI manager for ""
	I0903 22:56:45.908743    1228 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0903 22:56:45.908743    1228 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0903 22:56:45.908743    1228 start.go:348] cluster config:
	{Name:ha-270000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-270000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 22:56:45.908743    1228 iso.go:125] acquiring lock: {Name:mk966bde02eeea119c68f0830e579f0a83ec9e11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 22:56:45.912958    1228 out.go:179] * Starting "ha-270000" primary control-plane node in "ha-270000" cluster
	I0903 22:56:45.916152    1228 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0903 22:56:45.916152    1228 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0903 22:56:45.917130    1228 cache.go:58] Caching tarball of preloaded images
	I0903 22:56:45.917258    1228 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0903 22:56:45.917258    1228 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0903 22:56:45.917747    1228 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\config.json ...
	I0903 22:56:45.918325    1228 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\config.json: {Name:mk66003acb5cfca8863a58eed44798c01e27bcf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 22:56:45.918495    1228 start.go:360] acquireMachinesLock for ha-270000: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0903 22:56:45.919503    1228 start.go:364] duration metric: took 1.008ms to acquireMachinesLock for "ha-270000"
	I0903 22:56:45.919689    1228 start.go:93] Provisioning new machine with config: &{Name:ha-270000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-270000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0903 22:56:45.919689    1228 start.go:125] createHost starting for "" (driver="hyperv")
	I0903 22:56:45.925004    1228 out.go:252] * Creating hyperv VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0903 22:56:45.925733    1228 start.go:159] libmachine.API.Create for "ha-270000" (driver="hyperv")
	I0903 22:56:45.925733    1228 client.go:168] LocalClient.Create starting
	I0903 22:56:45.926025    1228 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0903 22:56:45.926633    1228 main.go:141] libmachine: Decoding PEM data...
	I0903 22:56:45.926673    1228 main.go:141] libmachine: Parsing certificate...
	I0903 22:56:45.927006    1228 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0903 22:56:45.927305    1228 main.go:141] libmachine: Decoding PEM data...
	I0903 22:56:45.927305    1228 main.go:141] libmachine: Parsing certificate...
	I0903 22:56:45.927484    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0903 22:56:48.016664    1228 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0903 22:56:48.016864    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:56:48.016999    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0903 22:56:49.751836    1228 main.go:141] libmachine: [stdout =====>] : False
	
	I0903 22:56:49.751836    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:56:49.751836    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0903 22:56:51.299486    1228 main.go:141] libmachine: [stdout =====>] : True
	
	I0903 22:56:51.299562    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:56:51.299562    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0903 22:56:54.893059    1228 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0903 22:56:54.893059    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:56:54.895436    1228 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.36.0-1753487480-21147-amd64.iso...
	I0903 22:56:55.547200    1228 main.go:141] libmachine: Creating SSH key...
	I0903 22:56:55.694695    1228 main.go:141] libmachine: Creating VM...
	I0903 22:56:55.694695    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0903 22:56:58.488314    1228 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0903 22:56:58.488376    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:56:58.488376    1228 main.go:141] libmachine: Using switch "Default Switch"
	I0903 22:56:58.488376    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0903 22:57:00.159456    1228 main.go:141] libmachine: [stdout =====>] : True
	
	I0903 22:57:00.160163    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:00.160163    1228 main.go:141] libmachine: Creating VHD
	I0903 22:57:00.160368    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0903 22:57:03.740838    1228 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : ABB2571A-43DB-4DDE-8704-A111EA40BF83
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0903 22:57:03.741294    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:03.741294    1228 main.go:141] libmachine: Writing magic tar header
	I0903 22:57:03.741294    1228 main.go:141] libmachine: Writing SSH key tar header
	I0903 22:57:03.753725    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0903 22:57:06.785123    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 22:57:06.785237    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:06.785359    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000\disk.vhd' -SizeBytes 20000MB
	I0903 22:57:09.246904    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 22:57:09.247119    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:09.247236    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-270000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000' -SwitchName 'Default Switch' -MemoryStartupBytes 3072MB
	I0903 22:57:12.884912    1228 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-270000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0903 22:57:12.885137    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:12.885137    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-270000 -DynamicMemoryEnabled $false
	I0903 22:57:15.057904    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 22:57:15.058186    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:15.058186    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-270000 -Count 2
	I0903 22:57:17.162536    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 22:57:17.162627    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:17.162627    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-270000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000\boot2docker.iso'
	I0903 22:57:19.686005    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 22:57:19.686005    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:19.686896    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-270000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000\disk.vhd'
	I0903 22:57:22.319331    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 22:57:22.319331    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:22.320086    1228 main.go:141] libmachine: Starting VM...
	I0903 22:57:22.320150    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-270000
	I0903 22:57:25.353871    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 22:57:25.353871    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:25.353871    1228 main.go:141] libmachine: Waiting for host to start...
	I0903 22:57:25.353871    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:57:27.536869    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:57:27.537246    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:27.537323    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:57:29.943283    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 22:57:29.943462    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:30.944809    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:57:33.070346    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:57:33.071425    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:33.071425    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:57:35.532133    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 22:57:35.532133    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:36.533608    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:57:38.672634    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:57:38.672634    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:38.672884    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:57:41.182817    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 22:57:41.182817    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:42.183902    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:57:44.381000    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:57:44.381220    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:44.381220    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:57:46.853303    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 22:57:46.853946    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:47.854276    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:57:49.941137    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:57:49.941137    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:49.941137    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:57:52.463528    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:57:52.463528    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:52.463852    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:57:54.507713    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:57:54.507713    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:54.507713    1228 machine.go:93] provisionDockerMachine start ...
	I0903 22:57:54.508737    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:57:56.547110    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:57:56.547630    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:56.547630    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:57:59.000195    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:57:59.001319    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:57:59.007347    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 22:57:59.022361    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.116.52 22 <nil> <nil>}
	I0903 22:57:59.022361    1228 main.go:141] libmachine: About to run SSH command:
	hostname
	I0903 22:57:59.161919    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0903 22:57:59.162111    1228 buildroot.go:166] provisioning hostname "ha-270000"
	I0903 22:57:59.162186    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:58:01.167715    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:58:01.167715    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:01.167715    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:58:03.603986    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:58:03.604287    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:03.613095    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 22:58:03.613891    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.116.52 22 <nil> <nil>}
	I0903 22:58:03.613891    1228 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-270000 && echo "ha-270000" | sudo tee /etc/hostname
	I0903 22:58:03.781937    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-270000
	
	I0903 22:58:03.782069    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:58:05.806968    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:58:05.807221    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:05.807221    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:58:08.214073    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:58:08.214073    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:08.220343    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 22:58:08.220903    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.116.52 22 <nil> <nil>}
	I0903 22:58:08.220903    1228 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-270000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-270000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-270000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0903 22:58:08.371173    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0903 22:58:08.371173    1228 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0903 22:58:08.371880    1228 buildroot.go:174] setting up certificates
	I0903 22:58:08.371880    1228 provision.go:84] configureAuth start
	I0903 22:58:08.371880    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:58:10.384354    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:58:10.385445    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:10.385589    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:58:12.797827    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:58:12.797827    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:12.797827    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:58:14.827178    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:58:14.827488    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:14.827557    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:58:17.256928    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:58:17.257487    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:17.257487    1228 provision.go:143] copyHostCerts
	I0903 22:58:17.257487    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0903 22:58:17.258601    1228 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0903 22:58:17.258601    1228 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0903 22:58:17.259378    1228 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0903 22:58:17.260678    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0903 22:58:17.260925    1228 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0903 22:58:17.260925    1228 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0903 22:58:17.261529    1228 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0903 22:58:17.262247    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0903 22:58:17.262895    1228 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0903 22:58:17.262895    1228 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0903 22:58:17.263660    1228 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0903 22:58:17.264895    1228 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-270000 san=[127.0.0.1 172.25.116.52 ha-270000 localhost minikube]
	I0903 22:58:17.319551    1228 provision.go:177] copyRemoteCerts
	I0903 22:58:17.331089    1228 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0903 22:58:17.331089    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:58:19.404625    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:58:19.404625    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:19.405155    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:58:21.798001    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:58:21.798199    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:21.798706    1228 sshutil.go:53] new ssh client: &{IP:172.25.116.52 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000\id_rsa Username:docker}
	I0903 22:58:21.913655    1228 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5823926s)
	I0903 22:58:21.913655    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0903 22:58:21.913877    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0903 22:58:21.963666    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0903 22:58:21.964199    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0903 22:58:22.018058    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0903 22:58:22.018058    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0903 22:58:22.083090    1228 provision.go:87] duration metric: took 13.7109324s to configureAuth
	I0903 22:58:22.083165    1228 buildroot.go:189] setting minikube options for container-runtime
	I0903 22:58:22.083815    1228 config.go:182] Loaded profile config "ha-270000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0903 22:58:22.083915    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:58:24.183310    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:58:24.183961    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:24.184029    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:58:26.736388    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:58:26.736388    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:26.742143    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 22:58:26.742976    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.116.52 22 <nil> <nil>}
	I0903 22:58:26.743061    1228 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0903 22:58:26.879239    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0903 22:58:26.879322    1228 buildroot.go:70] root file system type: tmpfs
	I0903 22:58:26.879432    1228 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0903 22:58:26.879432    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:58:28.965030    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:58:28.965187    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:28.965187    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:58:31.360709    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:58:31.360709    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:31.367391    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 22:58:31.368210    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.116.52 22 <nil> <nil>}
	I0903 22:58:31.368210    1228 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0903 22:58:31.539576    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0903 22:58:31.539705    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:58:33.562683    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:58:33.562729    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:33.562729    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:58:35.965536    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:58:35.965883    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:35.970665    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 22:58:35.971426    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.116.52 22 <nil> <nil>}
	I0903 22:58:35.971426    1228 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0903 22:58:37.383503    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
	
	I0903 22:58:37.383503    1228 machine.go:96] duration metric: took 42.8752051s to provisionDockerMachine
	I0903 22:58:37.383503    1228 client.go:171] duration metric: took 1m51.4562454s to LocalClient.Create
	I0903 22:58:37.383503    1228 start.go:167] duration metric: took 1m51.4562886s to libmachine.API.Create "ha-270000"
	I0903 22:58:37.383503    1228 start.go:293] postStartSetup for "ha-270000" (driver="hyperv")
	I0903 22:58:37.383503    1228 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0903 22:58:37.397578    1228 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0903 22:58:37.397578    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:58:39.484896    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:58:39.484940    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:39.484940    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:58:41.882170    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:58:41.883164    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:41.883643    1228 sshutil.go:53] new ssh client: &{IP:172.25.116.52 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000\id_rsa Username:docker}
	I0903 22:58:41.990226    1228 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.592585s)
	I0903 22:58:42.002247    1228 ssh_runner.go:195] Run: cat /etc/os-release
	I0903 22:58:42.009186    1228 info.go:137] Remote host: Buildroot 2025.02
	I0903 22:58:42.009186    1228 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0903 22:58:42.009366    1228 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0903 22:58:42.010687    1228 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem -> 22202.pem in /etc/ssl/certs
	I0903 22:58:42.010774    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem -> /etc/ssl/certs/22202.pem
	I0903 22:58:42.022820    1228 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0903 22:58:42.045622    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem --> /etc/ssl/certs/22202.pem (1708 bytes)
	I0903 22:58:42.096557    1228 start.go:296] duration metric: took 4.7129898s for postStartSetup
	I0903 22:58:42.100709    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:58:44.072469    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:58:44.073264    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:44.073264    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:58:46.511540    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:58:46.511685    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:46.512018    1228 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\config.json ...
	I0903 22:58:46.515059    1228 start.go:128] duration metric: took 2m0.5937211s to createHost
	I0903 22:58:46.515247    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:58:48.530992    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:58:48.530992    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:48.530992    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:58:50.975901    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:58:50.975901    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:50.982075    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 22:58:50.982075    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.116.52 22 <nil> <nil>}
	I0903 22:58:50.982075    1228 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0903 22:58:51.110851    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: 1756940331.135952045
	
	I0903 22:58:51.110851    1228 fix.go:216] guest clock: 1756940331.135952045
	I0903 22:58:51.110851    1228 fix.go:229] Guest: 2025-09-03 22:58:51.135952045 +0000 UTC Remote: 2025-09-03 22:58:46.5151379 +0000 UTC m=+126.059727001 (delta=4.620814145s)
	I0903 22:58:51.110851    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:58:53.204581    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:58:53.204581    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:53.205518    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:58:55.673742    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:58:55.673742    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:55.682995    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 22:58:55.683822    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.116.52 22 <nil> <nil>}
	I0903 22:58:55.683822    1228 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1756940331
	I0903 22:58:55.843455    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Sep  3 22:58:51 UTC 2025
	
	I0903 22:58:55.843455    1228 fix.go:236] clock set: Wed Sep  3 22:58:51 UTC 2025
	 (err=<nil>)
	I0903 22:58:55.843455    1228 start.go:83] releasing machines lock for "ha-270000", held for 2m9.9221404s
	I0903 22:58:55.843455    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:58:57.907698    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:58:57.907698    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:58:57.907698    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:59:00.349438    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:59:00.349438    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:59:00.354365    1228 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0903 22:59:00.354523    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:59:00.365673    1228 ssh_runner.go:195] Run: cat /version.json
	I0903 22:59:00.365673    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:59:02.475191    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:59:02.475191    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:59:02.475424    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:59:02.475424    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:59:02.475424    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:59:02.475424    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:59:05.066510    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:59:05.066545    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:59:05.067165    1228 sshutil.go:53] new ssh client: &{IP:172.25.116.52 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000\id_rsa Username:docker}
	I0903 22:59:05.095848    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:59:05.095848    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:59:05.096453    1228 sshutil.go:53] new ssh client: &{IP:172.25.116.52 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000\id_rsa Username:docker}
	I0903 22:59:05.161490    1228 ssh_runner.go:235] Completed: cat /version.json: (4.795752s)
	I0903 22:59:05.175138    1228 ssh_runner.go:195] Run: systemctl --version
	I0903 22:59:05.179441    1228 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.8250107s)
	W0903 22:59:05.179441    1228 start.go:868] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0903 22:59:05.197186    1228 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0903 22:59:05.205723    1228 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0903 22:59:05.218533    1228 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0903 22:59:05.247657    1228 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0903 22:59:05.247687    1228 start.go:495] detecting cgroup driver to use...
	I0903 22:59:05.247687    1228 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0903 22:59:05.301007    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0903 22:59:05.334929    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W0903 22:59:05.344907    1228 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0903 22:59:05.344907    1228 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0903 22:59:05.363500    1228 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0903 22:59:05.375785    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0903 22:59:05.408814    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0903 22:59:05.443170    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0903 22:59:05.474233    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0903 22:59:05.521546    1228 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0903 22:59:05.553883    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0903 22:59:05.588468    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0903 22:59:05.621710    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0903 22:59:05.654708    1228 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0903 22:59:05.671454    1228 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0903 22:59:05.683039    1228 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0903 22:59:05.712788    1228 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0903 22:59:05.744475    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 22:59:05.959396    1228 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0903 22:59:06.024145    1228 start.go:495] detecting cgroup driver to use...
	I0903 22:59:06.035150    1228 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0903 22:59:06.076305    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0903 22:59:06.110621    1228 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0903 22:59:06.156621    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0903 22:59:06.192292    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0903 22:59:06.230075    1228 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0903 22:59:06.299289    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0903 22:59:06.323084    1228 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0903 22:59:06.368994    1228 ssh_runner.go:195] Run: which cri-dockerd
	I0903 22:59:06.388692    1228 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0903 22:59:06.408906    1228 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0903 22:59:06.456171    1228 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0903 22:59:06.678583    1228 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0903 22:59:06.879109    1228 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0903 22:59:06.879109    1228 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0903 22:59:06.925446    1228 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0903 22:59:06.959185    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 22:59:07.199630    1228 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0903 22:59:07.386342    1228 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0903 22:59:07.423569    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0903 22:59:07.460030    1228 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0903 22:59:07.505005    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 22:59:07.752256    1228 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0903 22:59:08.742023    1228 retry.go:31] will retry after 1.216709918s: docker not running
	I0903 22:59:09.972425    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0903 22:59:10.015636    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0903 22:59:10.054647    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0903 22:59:10.095481    1228 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0903 22:59:10.329998    1228 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0903 22:59:10.572624    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 22:59:10.798123    1228 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0903 22:59:10.856052    1228 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0903 22:59:10.890575    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 22:59:11.116450    1228 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0903 22:59:11.266811    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0903 22:59:11.293390    1228 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0903 22:59:11.307088    1228 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0903 22:59:11.316839    1228 start.go:563] Will wait 60s for crictl version
	I0903 22:59:11.328011    1228 ssh_runner.go:195] Run: which crictl
	I0903 22:59:11.345410    1228 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0903 22:59:11.395997    1228 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.3.2
	RuntimeApiVersion:  v1
	I0903 22:59:11.406747    1228 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0903 22:59:11.450319    1228 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0903 22:59:11.494154    1228 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.3.2 ...
	I0903 22:59:11.494154    1228 ip.go:180] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0903 22:59:11.498371    1228 ip.go:194] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0903 22:59:11.499132    1228 ip.go:194] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0903 22:59:11.499132    1228 ip.go:189] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0903 22:59:11.499132    1228 ip.go:215] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:71:2e:33 Flags:up|broadcast|multicast|running}
	I0903 22:59:11.502130    1228 ip.go:218] interface addr: fe80::b536:5e95:cebf:bd87/64
	I0903 22:59:11.502130    1228 ip.go:218] interface addr: 172.25.112.1/20
	I0903 22:59:11.514219    1228 ssh_runner.go:195] Run: grep 172.25.112.1	host.minikube.internal$ /etc/hosts
	I0903 22:59:11.519778    1228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.112.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0903 22:59:11.553173    1228 kubeadm.go:875] updating cluster {Name:ha-270000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-270000 Namespace:default APIServerHAVIP:172.25.127.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.116.52 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0903 22:59:11.553489    1228 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0903 22:59:11.563125    1228 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0903 22:59:11.586673    1228 docker.go:691] Got preloaded images: 
	I0903 22:59:11.586673    1228 docker.go:697] registry.k8s.io/kube-apiserver:v1.34.0 wasn't preloaded
	I0903 22:59:11.598082    1228 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0903 22:59:11.629359    1228 ssh_runner.go:195] Run: which lz4
	I0903 22:59:11.635686    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0903 22:59:11.648294    1228 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0903 22:59:11.654997    1228 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0903 22:59:11.655342    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (353447550 bytes)
	I0903 22:59:13.296067    1228 docker.go:655] duration metric: took 1.6600156s to copy over tarball
	I0903 22:59:13.307060    1228 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0903 22:59:20.865355    1228 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (7.5581917s)
	I0903 22:59:20.865462    1228 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0903 22:59:20.933546    1228 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0903 22:59:20.957804    1228 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2632 bytes)
	I0903 22:59:21.002763    1228 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0903 22:59:21.038331    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 22:59:21.284391    1228 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0903 22:59:23.488193    1228 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.2036927s)
	I0903 22:59:23.500025    1228 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0903 22:59:23.529727    1228 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0903 22:59:23.529794    1228 cache_images.go:85] Images are preloaded, skipping loading
	I0903 22:59:23.529794    1228 kubeadm.go:926] updating node { 172.25.116.52 8443 v1.34.0 docker true true} ...
	I0903 22:59:23.529794    1228 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-270000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.116.52
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-270000 Namespace:default APIServerHAVIP:172.25.127.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0903 22:59:23.540680    1228 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0903 22:59:23.608509    1228 cni.go:84] Creating CNI manager for ""
	I0903 22:59:23.608543    1228 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0903 22:59:23.608628    1228 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0903 22:59:23.608709    1228 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.116.52 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-270000 NodeName:ha-270000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.116.52"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.116.52 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0903 22:59:23.608738    1228 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.116.52
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-270000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.25.116.52"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.116.52"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0903 22:59:23.608738    1228 kube-vip.go:115] generating kube-vip config ...
	I0903 22:59:23.621691    1228 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0903 22:59:23.653809    1228 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0903 22:59:23.654126    1228 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.25.127.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0903 22:59:23.666808    1228 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0903 22:59:23.685598    1228 binaries.go:44] Found k8s binaries, skipping transfer
	I0903 22:59:23.698227    1228 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0903 22:59:23.718760    1228 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0903 22:59:23.755199    1228 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0903 22:59:23.789033    1228 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I0903 22:59:23.822970    1228 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0903 22:59:23.874633    1228 ssh_runner.go:195] Run: grep 172.25.127.254	control-plane.minikube.internal$ /etc/hosts
	I0903 22:59:23.882338    1228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.127.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0903 22:59:23.915252    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 22:59:24.145793    1228 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0903 22:59:24.197926    1228 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000 for IP: 172.25.116.52
	I0903 22:59:24.197956    1228 certs.go:194] generating shared ca certs ...
	I0903 22:59:24.198007    1228 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 22:59:24.198893    1228 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0903 22:59:24.199396    1228 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0903 22:59:24.199456    1228 certs.go:256] generating profile certs ...
	I0903 22:59:24.200409    1228 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\client.key
	I0903 22:59:24.200591    1228 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\client.crt with IP's: []
	I0903 22:59:24.887505    1228 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\client.crt ...
	I0903 22:59:24.887505    1228 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\client.crt: {Name:mkb7aaa1eac443ddcdcabb4cef5bb739e9d38af9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 22:59:24.888985    1228 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\client.key ...
	I0903 22:59:24.888985    1228 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\client.key: {Name:mkc5b79577653c8f04349871260874ebd30aa001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 22:59:24.889458    1228 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key.9f1c9bfe
	I0903 22:59:24.889458    1228 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt.9f1c9bfe with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.116.52 172.25.127.254]
	I0903 22:59:24.972533    1228 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt.9f1c9bfe ...
	I0903 22:59:24.972533    1228 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt.9f1c9bfe: {Name:mkc5ecbd182ead24488b0bd7ce60227ca749e5fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 22:59:24.974572    1228 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key.9f1c9bfe ...
	I0903 22:59:24.974572    1228 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key.9f1c9bfe: {Name:mk823ce6e6d376d463e4c5c3be67b708c72c9bbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 22:59:24.977545    1228 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt.9f1c9bfe -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt
	I0903 22:59:24.988484    1228 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key.9f1c9bfe -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key
	I0903 22:59:24.990570    1228 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.key
	I0903 22:59:24.990570    1228 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.crt with IP's: []
	I0903 22:59:25.149477    1228 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.crt ...
	I0903 22:59:25.149477    1228 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.crt: {Name:mk37d7e8a33d45a73e07c2e5522d69b31733f450 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 22:59:25.151837    1228 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.key ...
	I0903 22:59:25.151837    1228 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.key: {Name:mk79e180a6069c7b0284816924a3968ca51e1f5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 22:59:25.152837    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0903 22:59:25.153300    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0903 22:59:25.153574    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0903 22:59:25.153844    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0903 22:59:25.154060    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0903 22:59:25.154230    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0903 22:59:25.154230    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0903 22:59:25.166019    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0903 22:59:25.166999    1228 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220.pem (1338 bytes)
	W0903 22:59:25.167916    1228 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220_empty.pem, impossibly tiny 0 bytes
	I0903 22:59:25.167916    1228 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0903 22:59:25.168261    1228 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0903 22:59:25.168626    1228 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0903 22:59:25.169204    1228 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0903 22:59:25.169514    1228 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem (1708 bytes)
	I0903 22:59:25.169514    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220.pem -> /usr/share/ca-certificates/2220.pem
	I0903 22:59:25.170418    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem -> /usr/share/ca-certificates/22202.pem
	I0903 22:59:25.170418    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0903 22:59:25.171306    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0903 22:59:25.225766    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0903 22:59:25.280504    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0903 22:59:25.336761    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0903 22:59:25.415957    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0903 22:59:25.487966    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0903 22:59:25.562028    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0903 22:59:25.627079    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0903 22:59:25.686633    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220.pem --> /usr/share/ca-certificates/2220.pem (1338 bytes)
	I0903 22:59:25.741231    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem --> /usr/share/ca-certificates/22202.pem (1708 bytes)
	I0903 22:59:25.787473    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0903 22:59:25.839569    1228 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0903 22:59:25.887378    1228 ssh_runner.go:195] Run: openssl version
	I0903 22:59:25.908881    1228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2220.pem && ln -fs /usr/share/ca-certificates/2220.pem /etc/ssl/certs/2220.pem"
	I0903 22:59:25.942912    1228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2220.pem
	I0903 22:59:25.949548    1228 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  3 22:37 /usr/share/ca-certificates/2220.pem
	I0903 22:59:25.961629    1228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2220.pem
	I0903 22:59:25.987716    1228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2220.pem /etc/ssl/certs/51391683.0"
	I0903 22:59:26.022131    1228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22202.pem && ln -fs /usr/share/ca-certificates/22202.pem /etc/ssl/certs/22202.pem"
	I0903 22:59:26.067732    1228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22202.pem
	I0903 22:59:26.076973    1228 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  3 22:37 /usr/share/ca-certificates/22202.pem
	I0903 22:59:26.091872    1228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22202.pem
	I0903 22:59:26.119482    1228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22202.pem /etc/ssl/certs/3ec20f2e.0"
	I0903 22:59:26.163297    1228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0903 22:59:26.223621    1228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0903 22:59:26.232849    1228 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  3 22:20 /usr/share/ca-certificates/minikubeCA.pem
	I0903 22:59:26.248494    1228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0903 22:59:26.276462    1228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
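Annotation: the `ln -fs ... <hash>.0` commands in the lines above follow OpenSSL's hashed-directory convention — the system trust store is searched by a filename derived from the certificate's subject hash, so minikube links each CA PEM under its computed hash. A minimal sketch of that step (the `hash_link` helper name is ours, not minikube's; paths are illustrative):

```shell
# Link a CA PEM into a trust directory under its OpenSSL subject hash,
# mirroring the "openssl x509 -hash" + "ln -fs ... <hash>.0" pair in the log.
hash_link() {
  pem=$1
  certs_dir=$2
  hash=$(openssl x509 -hash -noout -in "$pem")   # e.g. b5213941
  ln -fs "$pem" "$certs_dir/$hash.0"
  echo "$certs_dir/$hash.0"
}
```

In the log, minikube runs the equivalent via `ssh_runner` with `sudo` so the links land in the VM's `/etc/ssl/certs`.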
	I0903 22:59:26.316381    1228 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0903 22:59:26.324206    1228 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0903 22:59:26.324684    1228 kubeadm.go:392] StartCluster: {Name:ha-270000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-270000 Namespace:default APIServerHAVIP:172.25.127.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.116.52 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 22:59:26.336642    1228 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0903 22:59:26.375638    1228 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0903 22:59:26.421667    1228 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0903 22:59:26.462196    1228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0903 22:59:26.486171    1228 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0903 22:59:26.486171    1228 kubeadm.go:157] found existing configuration files:
	
	I0903 22:59:26.498171    1228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0903 22:59:26.516161    1228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0903 22:59:26.527161    1228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0903 22:59:26.563353    1228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0903 22:59:26.585736    1228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0903 22:59:26.596787    1228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0903 22:59:26.626847    1228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0903 22:59:26.650644    1228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0903 22:59:26.664473    1228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0903 22:59:26.699851    1228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0903 22:59:26.719594    1228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0903 22:59:26.731666    1228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0903 22:59:26.751379    1228 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0903 22:59:26.978629    1228 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0903 22:59:44.810760    1228 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0903 22:59:44.810760    1228 kubeadm.go:310] [preflight] Running pre-flight checks
	I0903 22:59:44.810760    1228 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0903 22:59:44.811527    1228 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0903 22:59:44.811679    1228 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0903 22:59:44.811679    1228 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0903 22:59:44.815084    1228 out.go:252]   - Generating certificates and keys ...
	I0903 22:59:44.815205    1228 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0903 22:59:44.815205    1228 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0903 22:59:44.815205    1228 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0903 22:59:44.815882    1228 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0903 22:59:44.815882    1228 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0903 22:59:44.815882    1228 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0903 22:59:44.815882    1228 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0903 22:59:44.816689    1228 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-270000 localhost] and IPs [172.25.116.52 127.0.0.1 ::1]
	I0903 22:59:44.816754    1228 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0903 22:59:44.816754    1228 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-270000 localhost] and IPs [172.25.116.52 127.0.0.1 ::1]
	I0903 22:59:44.816754    1228 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0903 22:59:44.817441    1228 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0903 22:59:44.817530    1228 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0903 22:59:44.817719    1228 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0903 22:59:44.817719    1228 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0903 22:59:44.817719    1228 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0903 22:59:44.817719    1228 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0903 22:59:44.818413    1228 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0903 22:59:44.818600    1228 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0903 22:59:44.818698    1228 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0903 22:59:44.818698    1228 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0903 22:59:44.822293    1228 out.go:252]   - Booting up control plane ...
	I0903 22:59:44.822293    1228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0903 22:59:44.822293    1228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0903 22:59:44.823100    1228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0903 22:59:44.823123    1228 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0903 22:59:44.823123    1228 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0903 22:59:44.823706    1228 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0903 22:59:44.823871    1228 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0903 22:59:44.823871    1228 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0903 22:59:44.823871    1228 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0903 22:59:44.823871    1228 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0903 22:59:44.823871    1228 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.57863ms
	I0903 22:59:44.823871    1228 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0903 22:59:44.823871    1228 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://172.25.116.52:8443/livez
	I0903 22:59:44.825077    1228 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0903 22:59:44.825312    1228 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0903 22:59:44.825470    1228 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 5.564494726s
	I0903 22:59:44.825512    1228 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 6.900831099s
	I0903 22:59:44.825859    1228 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 10.003335069s
	I0903 22:59:44.825911    1228 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0903 22:59:44.825911    1228 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0903 22:59:44.826613    1228 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0903 22:59:44.826763    1228 kubeadm.go:310] [mark-control-plane] Marking the node ha-270000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0903 22:59:44.827359    1228 kubeadm.go:310] [bootstrap-token] Using token: 128eq1.2kh3zrs5ds3cj6iy
	I0903 22:59:44.830041    1228 out.go:252]   - Configuring RBAC rules ...
	I0903 22:59:44.830354    1228 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0903 22:59:44.830509    1228 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0903 22:59:44.830972    1228 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0903 22:59:44.831410    1228 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0903 22:59:44.831817    1228 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0903 22:59:44.832156    1228 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0903 22:59:44.832350    1228 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0903 22:59:44.832423    1228 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0903 22:59:44.832600    1228 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0903 22:59:44.832600    1228 kubeadm.go:310] 
	I0903 22:59:44.832600    1228 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0903 22:59:44.832801    1228 kubeadm.go:310] 
	I0903 22:59:44.833058    1228 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0903 22:59:44.833099    1228 kubeadm.go:310] 
	I0903 22:59:44.833099    1228 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0903 22:59:44.833099    1228 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0903 22:59:44.833099    1228 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0903 22:59:44.833099    1228 kubeadm.go:310] 
	I0903 22:59:44.833099    1228 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0903 22:59:44.833099    1228 kubeadm.go:310] 
	I0903 22:59:44.833099    1228 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0903 22:59:44.833099    1228 kubeadm.go:310] 
	I0903 22:59:44.833718    1228 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0903 22:59:44.833815    1228 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0903 22:59:44.833815    1228 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0903 22:59:44.833815    1228 kubeadm.go:310] 
	I0903 22:59:44.833815    1228 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0903 22:59:44.834388    1228 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0903 22:59:44.834426    1228 kubeadm.go:310] 
	I0903 22:59:44.834426    1228 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 128eq1.2kh3zrs5ds3cj6iy \
	I0903 22:59:44.834426    1228 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:461028e7d31446a9db54ef88db35928fa51812dbcfd2f42c8a70c32665923137 \
	I0903 22:59:44.834426    1228 kubeadm.go:310] 	--control-plane 
	I0903 22:59:44.834426    1228 kubeadm.go:310] 
	I0903 22:59:44.835022    1228 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0903 22:59:44.835134    1228 kubeadm.go:310] 
	I0903 22:59:44.835206    1228 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 128eq1.2kh3zrs5ds3cj6iy \
	I0903 22:59:44.835206    1228 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:461028e7d31446a9db54ef88db35928fa51812dbcfd2f42c8a70c32665923137 
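Annotation: the `--discovery-token-ca-cert-hash` printed in the join commands above is the SHA-256 digest of the cluster CA's DER-encoded public key. A hedged sketch of recomputing it from a `ca.crt` (the `ca_pubkey_hash` helper name is ours; the pipeline matches what the kubeadm documentation describes):

```shell
# Recompute kubeadm's discovery-token CA cert hash from a CA certificate:
# SHA-256 over the DER encoding of the certificate's public key.
ca_pubkey_hash() {
  openssl x509 -pubkey -noout -in "$1" \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 \
    | awk '{print "sha256:" $NF}'
}
```

Joining nodes can use this to verify they are talking to the intended control plane rather than trusting the network blindly.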
	I0903 22:59:44.835206    1228 cni.go:84] Creating CNI manager for ""
	I0903 22:59:44.835206    1228 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0903 22:59:44.838358    1228 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0903 22:59:44.854466    1228 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0903 22:59:44.864658    1228 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0903 22:59:44.864658    1228 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0903 22:59:44.918774    1228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0903 22:59:45.313364    1228 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0903 22:59:45.329241    1228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 22:59:45.332401    1228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-270000 minikube.k8s.io/updated_at=2025_09_03T22_59_45_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=b3583632deefb20d71cab8d8ac0a8c3504aed1fb minikube.k8s.io/name=ha-270000 minikube.k8s.io/primary=true
	I0903 22:59:45.393884    1228 ops.go:34] apiserver oom_adj: -16
	I0903 22:59:45.547506    1228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 22:59:46.046702    1228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 22:59:46.546264    1228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 22:59:47.045032    1228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 22:59:47.547870    1228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 22:59:48.047805    1228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 22:59:48.548063    1228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 22:59:49.047476    1228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 22:59:49.547038    1228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 22:59:49.705088    1228 kubeadm.go:1105] duration metric: took 4.3915219s to wait for elevateKubeSystemPrivileges
	I0903 22:59:49.705088    1228 kubeadm.go:394] duration metric: took 23.3801619s to StartCluster
	I0903 22:59:49.705088    1228 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 22:59:49.705088    1228 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0903 22:59:49.707409    1228 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 22:59:49.708771    1228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0903 22:59:49.708771    1228 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.25.116.52 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0903 22:59:49.709312    1228 start.go:241] waiting for startup goroutines ...
	I0903 22:59:49.708771    1228 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0903 22:59:49.709487    1228 addons.go:69] Setting storage-provisioner=true in profile "ha-270000"
	I0903 22:59:49.709487    1228 addons.go:69] Setting default-storageclass=true in profile "ha-270000"
	I0903 22:59:49.709487    1228 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-270000"
	I0903 22:59:49.709487    1228 config.go:182] Loaded profile config "ha-270000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0903 22:59:49.709487    1228 addons.go:238] Setting addon storage-provisioner=true in "ha-270000"
	I0903 22:59:49.709487    1228 host.go:66] Checking if "ha-270000" exists ...
	I0903 22:59:49.710228    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:59:49.711086    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:59:49.888307    1228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.25.112.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0903 22:59:50.342829    1228 start.go:976] {"host.minikube.internal": 172.25.112.1} host record injected into CoreDNS's ConfigMap
	I0903 22:59:51.992108    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:59:51.992165    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:59:51.993027    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:59:51.993027    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:59:51.995201    1228 kapi.go:59] client config for ha-270000: &rest.Config{Host:"https://172.25.127.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-270000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-270000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24e0580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0903 22:59:51.995201    1228 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0903 22:59:51.997683    1228 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0903 22:59:51.997855    1228 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0903 22:59:51.997921    1228 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0903 22:59:51.997950    1228 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0903 22:59:51.997950    1228 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0903 22:59:51.997968    1228 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0903 22:59:51.998492    1228 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0903 22:59:51.998523    1228 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0903 22:59:51.998523    1228 addons.go:238] Setting addon default-storageclass=true in "ha-270000"
	I0903 22:59:51.998523    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:59:51.998523    1228 host.go:66] Checking if "ha-270000" exists ...
	I0903 22:59:51.999775    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:59:54.396260    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:59:54.396260    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:59:54.396260    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:59:54.396260    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:59:54.396260    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:59:54.396260    1228 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0903 22:59:54.396260    1228 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0903 22:59:54.396260    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 22:59:56.602798    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 22:59:56.603007    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:59:56.603086    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 22:59:57.131273    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:59:57.132296    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:59:57.134594    1228 sshutil.go:53] new ssh client: &{IP:172.25.116.52 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000\id_rsa Username:docker}
	I0903 22:59:57.292276    1228 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0903 22:59:59.195658    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 22:59:59.196377    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 22:59:59.196748    1228 sshutil.go:53] new ssh client: &{IP:172.25.116.52 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000\id_rsa Username:docker}
	I0903 22:59:59.345758    1228 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0903 22:59:59.521179    1228 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0903 22:59:59.523303    1228 addons.go:514] duration metric: took 9.8143984s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0903 22:59:59.523303    1228 start.go:246] waiting for cluster config update ...
	I0903 22:59:59.523303    1228 start.go:255] writing updated cluster config ...
	I0903 22:59:59.529168    1228 out.go:203] 
	I0903 22:59:59.543756    1228 config.go:182] Loaded profile config "ha-270000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0903 22:59:59.543756    1228 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\config.json ...
	I0903 22:59:59.551829    1228 out.go:179] * Starting "ha-270000-m02" control-plane node in "ha-270000" cluster
	I0903 22:59:59.555929    1228 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0903 22:59:59.555929    1228 cache.go:58] Caching tarball of preloaded images
	I0903 22:59:59.556753    1228 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0903 22:59:59.556753    1228 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0903 22:59:59.556753    1228 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\config.json ...
	I0903 22:59:59.564917    1228 start.go:360] acquireMachinesLock for ha-270000-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0903 22:59:59.564995    1228 start.go:364] duration metric: took 77.6µs to acquireMachinesLock for "ha-270000-m02"
	I0903 22:59:59.564995    1228 start.go:93] Provisioning new machine with config: &{Name:ha-270000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-270000 Namespace:default APIServerHAVIP:172.25.127.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.116.52 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0903 22:59:59.564995    1228 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0903 22:59:59.567675    1228 out.go:252] * Creating hyperv VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0903 22:59:59.567675    1228 start.go:159] libmachine.API.Create for "ha-270000" (driver="hyperv")
	I0903 22:59:59.567675    1228 client.go:168] LocalClient.Create starting
	I0903 22:59:59.568588    1228 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0903 22:59:59.568588    1228 main.go:141] libmachine: Decoding PEM data...
	I0903 22:59:59.568588    1228 main.go:141] libmachine: Parsing certificate...
	I0903 22:59:59.568588    1228 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0903 22:59:59.569581    1228 main.go:141] libmachine: Decoding PEM data...
	I0903 22:59:59.569581    1228 main.go:141] libmachine: Parsing certificate...
	I0903 22:59:59.569581    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0903 23:00:01.434365    1228 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0903 23:00:01.434365    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:01.434365    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0903 23:00:03.206336    1228 main.go:141] libmachine: [stdout =====>] : False
	
	I0903 23:00:03.206456    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:03.206511    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0903 23:00:04.694654    1228 main.go:141] libmachine: [stdout =====>] : True
	
	I0903 23:00:04.694882    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:04.694882    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0903 23:00:08.310297    1228 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0903 23:00:08.310547    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:08.312413    1228 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.36.0-1753487480-21147-amd64.iso...
	I0903 23:00:08.977686    1228 main.go:141] libmachine: Creating SSH key...
	I0903 23:00:09.279116    1228 main.go:141] libmachine: Creating VM...
	I0903 23:00:09.279116    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0903 23:00:12.161709    1228 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0903 23:00:12.162233    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:12.162233    1228 main.go:141] libmachine: Using switch "Default Switch"
	I0903 23:00:12.162233    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0903 23:00:13.955171    1228 main.go:141] libmachine: [stdout =====>] : True
	
	I0903 23:00:13.955470    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:13.955470    1228 main.go:141] libmachine: Creating VHD
	I0903 23:00:13.955470    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0903 23:00:17.582383    1228 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : C6F9CF35-BAE2-447C-9334-441540916198
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0903 23:00:17.582633    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:17.582633    1228 main.go:141] libmachine: Writing magic tar header
	I0903 23:00:17.582713    1228 main.go:141] libmachine: Writing SSH key tar header
	I0903 23:00:17.597666    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0903 23:00:20.786085    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:00:20.786932    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:20.787028    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m02\disk.vhd' -SizeBytes 20000MB
	I0903 23:00:23.441578    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:00:23.441578    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:23.441578    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-270000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 3072MB
	I0903 23:00:27.038138    1228 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-270000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0903 23:00:27.038138    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:27.038900    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-270000-m02 -DynamicMemoryEnabled $false
	I0903 23:00:29.214762    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:00:29.215160    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:29.215160    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-270000-m02 -Count 2
	I0903 23:00:31.343129    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:00:31.343129    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:31.343895    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-270000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m02\boot2docker.iso'
	I0903 23:00:33.882312    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:00:33.883472    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:33.883536    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-270000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m02\disk.vhd'
	I0903 23:00:36.511924    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:00:36.511924    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:36.511924    1228 main.go:141] libmachine: Starting VM...
	I0903 23:00:36.511924    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-270000-m02
	I0903 23:00:39.604406    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:00:39.604406    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:39.604406    1228 main.go:141] libmachine: Waiting for host to start...
	I0903 23:00:39.605534    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:00:41.809752    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:00:41.809791    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:41.809875    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:00:44.263721    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:00:44.263721    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:45.264365    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:00:47.388052    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:00:47.388052    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:47.388853    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:00:49.908592    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:00:49.908592    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:50.910491    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:00:53.055638    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:00:53.055778    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:53.055866    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:00:55.506199    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:00:55.506199    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:56.506649    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:00:58.645358    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:00:58.645358    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:00:58.645568    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:01:01.111641    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:01:01.111641    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:02.112820    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:01:04.279882    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:01:04.280127    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:04.280127    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:01:06.946702    1228 main.go:141] libmachine: [stdout =====>] : 172.25.120.53
	
	I0903 23:01:06.947149    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:06.947149    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:01:09.064463    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:01:09.064463    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:09.064463    1228 machine.go:93] provisionDockerMachine start ...
	I0903 23:01:09.065092    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:01:11.249147    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:01:11.249388    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:11.249449    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:01:13.780965    1228 main.go:141] libmachine: [stdout =====>] : 172.25.120.53
	
	I0903 23:01:13.780965    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:13.787233    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 23:01:13.802486    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.120.53 22 <nil> <nil>}
	I0903 23:01:13.802486    1228 main.go:141] libmachine: About to run SSH command:
	hostname
	I0903 23:01:13.944458    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0903 23:01:13.944458    1228 buildroot.go:166] provisioning hostname "ha-270000-m02"
	I0903 23:01:13.944555    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:01:16.014848    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:01:16.015598    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:16.015728    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:01:18.480070    1228 main.go:141] libmachine: [stdout =====>] : 172.25.120.53
	
	I0903 23:01:18.480070    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:18.486139    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 23:01:18.487034    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.120.53 22 <nil> <nil>}
	I0903 23:01:18.487034    1228 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-270000-m02 && echo "ha-270000-m02" | sudo tee /etc/hostname
	I0903 23:01:18.653822    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-270000-m02
	
	I0903 23:01:18.653962    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:01:20.738564    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:01:20.739272    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:20.739272    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:01:23.309791    1228 main.go:141] libmachine: [stdout =====>] : 172.25.120.53
	
	I0903 23:01:23.309898    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:23.316369    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 23:01:23.317085    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.120.53 22 <nil> <nil>}
	I0903 23:01:23.317085    1228 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-270000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-270000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-270000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0903 23:01:23.482022    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0903 23:01:23.482022    1228 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0903 23:01:23.482022    1228 buildroot.go:174] setting up certificates
	I0903 23:01:23.482022    1228 provision.go:84] configureAuth start
	I0903 23:01:23.482022    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:01:25.567525    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:01:25.568320    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:25.568320    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:01:28.109138    1228 main.go:141] libmachine: [stdout =====>] : 172.25.120.53
	
	I0903 23:01:28.109138    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:28.109215    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:01:30.178520    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:01:30.178520    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:30.179071    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:01:32.682960    1228 main.go:141] libmachine: [stdout =====>] : 172.25.120.53
	
	I0903 23:01:32.682960    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:32.683078    1228 provision.go:143] copyHostCerts
	I0903 23:01:32.683205    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0903 23:01:32.683540    1228 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0903 23:01:32.683540    1228 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0903 23:01:32.684090    1228 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0903 23:01:32.685403    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0903 23:01:32.685890    1228 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0903 23:01:32.685890    1228 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0903 23:01:32.686412    1228 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0903 23:01:32.687791    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0903 23:01:32.688071    1228 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0903 23:01:32.688071    1228 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0903 23:01:32.688554    1228 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0903 23:01:32.689918    1228 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-270000-m02 san=[127.0.0.1 172.25.120.53 ha-270000-m02 localhost minikube]
	I0903 23:01:33.223764    1228 provision.go:177] copyRemoteCerts
	I0903 23:01:33.236539    1228 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0903 23:01:33.236702    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:01:35.330576    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:01:35.330765    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:35.330886    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:01:37.884642    1228 main.go:141] libmachine: [stdout =====>] : 172.25.120.53
	
	I0903 23:01:37.885754    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:37.886165    1228 sshutil.go:53] new ssh client: &{IP:172.25.120.53 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m02\id_rsa Username:docker}
	I0903 23:01:38.016251    1228 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7796477s)
	I0903 23:01:38.016251    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0903 23:01:38.016845    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0903 23:01:38.083946    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0903 23:01:38.084108    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0903 23:01:38.150514    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0903 23:01:38.151048    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0903 23:01:38.216537    1228 provision.go:87] duration metric: took 14.7343143s to configureAuth
	I0903 23:01:38.216537    1228 buildroot.go:189] setting minikube options for container-runtime
	I0903 23:01:38.216537    1228 config.go:182] Loaded profile config "ha-270000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0903 23:01:38.217449    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:01:40.284382    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:01:40.284382    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:40.284611    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:01:42.834889    1228 main.go:141] libmachine: [stdout =====>] : 172.25.120.53
	
	I0903 23:01:42.834889    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:42.841328    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 23:01:42.841846    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.120.53 22 <nil> <nil>}
	I0903 23:01:42.841846    1228 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0903 23:01:42.975282    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0903 23:01:42.975282    1228 buildroot.go:70] root file system type: tmpfs
	I0903 23:01:42.975469    1228 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0903 23:01:42.975693    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:01:45.033543    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:01:45.033543    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:45.033543    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:01:47.532694    1228 main.go:141] libmachine: [stdout =====>] : 172.25.120.53
	
	I0903 23:01:47.532833    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:47.539314    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 23:01:47.539996    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.120.53 22 <nil> <nil>}
	I0903 23:01:47.539996    1228 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=172.25.116.52"
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0903 23:01:47.722259    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=172.25.116.52
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0903 23:01:47.723589    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:01:49.829978    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:01:49.829978    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:49.831091    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:01:52.405256    1228 main.go:141] libmachine: [stdout =====>] : 172.25.120.53
	
	I0903 23:01:52.406355    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:52.413533    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 23:01:52.414293    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.120.53 22 <nil> <nil>}
	I0903 23:01:52.414293    1228 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0903 23:01:53.833755    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
	
	I0903 23:01:53.833755    1228 machine.go:96] duration metric: took 44.7686825s to provisionDockerMachine
	I0903 23:01:53.833755    1228 client.go:171] duration metric: took 1m54.2645318s to LocalClient.Create
	I0903 23:01:53.833755    1228 start.go:167] duration metric: took 1m54.2645318s to libmachine.API.Create "ha-270000"
	I0903 23:01:53.833755    1228 start.go:293] postStartSetup for "ha-270000-m02" (driver="hyperv")
	I0903 23:01:53.833755    1228 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0903 23:01:53.845728    1228 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0903 23:01:53.845728    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:01:55.934260    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:01:55.934347    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:55.934347    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:01:58.385956    1228 main.go:141] libmachine: [stdout =====>] : 172.25.120.53
	
	I0903 23:01:58.385956    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:01:58.387249    1228 sshutil.go:53] new ssh client: &{IP:172.25.120.53 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m02\id_rsa Username:docker}
	I0903 23:01:58.495931    1228 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6491316s)
	I0903 23:01:58.508634    1228 ssh_runner.go:195] Run: cat /etc/os-release
	I0903 23:01:58.516214    1228 info.go:137] Remote host: Buildroot 2025.02
	I0903 23:01:58.516335    1228 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0903 23:01:58.516489    1228 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0903 23:01:58.518016    1228 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem -> 22202.pem in /etc/ssl/certs
	I0903 23:01:58.518016    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem -> /etc/ssl/certs/22202.pem
	I0903 23:01:58.529751    1228 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0903 23:01:58.550120    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem --> /etc/ssl/certs/22202.pem (1708 bytes)
	I0903 23:01:58.602436    1228 start.go:296] duration metric: took 4.7686159s for postStartSetup
	I0903 23:01:58.605183    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:02:00.669905    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:02:00.669905    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:02:00.669905    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:02:03.168966    1228 main.go:141] libmachine: [stdout =====>] : 172.25.120.53
	
	I0903 23:02:03.169974    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:02:03.170221    1228 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\config.json ...
	I0903 23:02:03.172746    1228 start.go:128] duration metric: took 2m3.6060749s to createHost
	I0903 23:02:03.172746    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:02:05.235750    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:02:05.235750    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:02:05.235999    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:02:07.722483    1228 main.go:141] libmachine: [stdout =====>] : 172.25.120.53
	
	I0903 23:02:07.723334    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:02:07.728834    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 23:02:07.729508    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.120.53 22 <nil> <nil>}
	I0903 23:02:07.729508    1228 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0903 23:02:07.865315    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: 1756940527.890251015
	
	I0903 23:02:07.865315    1228 fix.go:216] guest clock: 1756940527.890251015
	I0903 23:02:07.865315    1228 fix.go:229] Guest: 2025-09-03 23:02:07.890251015 +0000 UTC Remote: 2025-09-03 23:02:03.1727465 +0000 UTC m=+322.714665501 (delta=4.717504515s)
	I0903 23:02:07.865541    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:02:09.900597    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:02:09.900687    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:02:09.900794    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:02:12.419556    1228 main.go:141] libmachine: [stdout =====>] : 172.25.120.53
	
	I0903 23:02:12.419883    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:02:12.425786    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 23:02:12.426693    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.120.53 22 <nil> <nil>}
	I0903 23:02:12.426693    1228 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1756940527
	I0903 23:02:12.578947    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Sep  3 23:02:07 UTC 2025
	
	I0903 23:02:12.579084    1228 fix.go:236] clock set: Wed Sep  3 23:02:07 UTC 2025
	 (err=<nil>)
	I0903 23:02:12.579084    1228 start.go:83] releasing machines lock for "ha-270000-m02", held for 2m13.0122836s
	I0903 23:02:12.579294    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:02:14.641126    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:02:14.641126    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:02:14.641126    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:02:17.108916    1228 main.go:141] libmachine: [stdout =====>] : 172.25.120.53
	
	I0903 23:02:17.109452    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:02:17.114623    1228 out.go:179] * Found network options:
	I0903 23:02:17.121169    1228 out.go:179]   - NO_PROXY=172.25.116.52
	W0903 23:02:17.125661    1228 proxy.go:120] fail to check proxy env: Error ip not in block
	I0903 23:02:17.131986    1228 out.go:179]   - NO_PROXY=172.25.116.52
	W0903 23:02:17.136767    1228 proxy.go:120] fail to check proxy env: Error ip not in block
	W0903 23:02:17.138489    1228 proxy.go:120] fail to check proxy env: Error ip not in block
	I0903 23:02:17.141219    1228 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0903 23:02:17.141352    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:02:17.151382    1228 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0903 23:02:17.151382    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m02 ).state
	I0903 23:02:19.268373    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:02:19.268373    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:02:19.268373    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:02:19.285622    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:02:19.285622    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:02:19.285880    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m02 ).networkadapters[0]).ipaddresses[0]
	I0903 23:02:21.871079    1228 main.go:141] libmachine: [stdout =====>] : 172.25.120.53
	
	I0903 23:02:21.871892    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:02:21.872024    1228 sshutil.go:53] new ssh client: &{IP:172.25.120.53 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m02\id_rsa Username:docker}
	I0903 23:02:21.905051    1228 main.go:141] libmachine: [stdout =====>] : 172.25.120.53
	
	I0903 23:02:21.905661    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:02:21.906178    1228 sshutil.go:53] new ssh client: &{IP:172.25.120.53 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m02\id_rsa Username:docker}
	I0903 23:02:21.980733    1228 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.8292848s)
	W0903 23:02:21.980818    1228 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0903 23:02:21.994134    1228 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0903 23:02:21.998622    1228 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.8573369s)
	W0903 23:02:21.998622    1228 start.go:868] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0903 23:02:22.037084    1228 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0903 23:02:22.037084    1228 start.go:495] detecting cgroup driver to use...
	I0903 23:02:22.037485    1228 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0903 23:02:22.108232    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W0903 23:02:22.112524    1228 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0903 23:02:22.112524    1228 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0903 23:02:22.152392    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0903 23:02:22.180822    1228 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0903 23:02:22.194781    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0903 23:02:22.233790    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0903 23:02:22.271175    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0903 23:02:22.314751    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0903 23:02:22.356383    1228 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0903 23:02:22.394152    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0903 23:02:22.434437    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0903 23:02:22.470960    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0903 23:02:22.512006    1228 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0903 23:02:22.535563    1228 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0903 23:02:22.548897    1228 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0903 23:02:22.591913    1228 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0903 23:02:22.627762    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:02:22.874257    1228 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0903 23:02:22.943307    1228 start.go:495] detecting cgroup driver to use...
	I0903 23:02:22.956751    1228 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0903 23:02:22.997536    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0903 23:02:23.036257    1228 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0903 23:02:23.089575    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0903 23:02:23.147669    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0903 23:02:23.191923    1228 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0903 23:02:23.264858    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0903 23:02:23.290430    1228 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0903 23:02:23.343996    1228 ssh_runner.go:195] Run: which cri-dockerd
	I0903 23:02:23.363909    1228 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0903 23:02:23.387129    1228 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0903 23:02:23.434981    1228 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0903 23:02:23.698368    1228 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0903 23:02:23.914214    1228 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0903 23:02:23.914272    1228 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0903 23:02:23.966599    1228 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0903 23:02:24.002501    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:02:24.240165    1228 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0903 23:02:24.407748    1228 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0903 23:02:24.446656    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0903 23:02:24.486595    1228 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0903 23:02:24.531162    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:02:24.783527    1228 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0903 23:02:25.830171    1228 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.04663s)
	I0903 23:02:25.830171    1228 retry.go:31] will retry after 730.544213ms: docker not running
	I0903 23:02:26.573593    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0903 23:02:26.612362    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0903 23:02:26.653409    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0903 23:02:26.694918    1228 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0903 23:02:26.919537    1228 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0903 23:02:27.150743    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:02:27.382354    1228 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0903 23:02:27.450230    1228 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0903 23:02:27.486316    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:02:27.716726    1228 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0903 23:02:27.879001    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0903 23:02:27.904880    1228 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0903 23:02:27.916481    1228 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0903 23:02:27.925367    1228 start.go:563] Will wait 60s for crictl version
	I0903 23:02:27.937432    1228 ssh_runner.go:195] Run: which crictl
	I0903 23:02:27.959885    1228 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0903 23:02:28.014104    1228 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.3.2
	RuntimeApiVersion:  v1
	I0903 23:02:28.025671    1228 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0903 23:02:28.084798    1228 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0903 23:02:28.123308    1228 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.3.2 ...
	I0903 23:02:28.127043    1228 out.go:179]   - env NO_PROXY=172.25.116.52
	I0903 23:02:28.129459    1228 ip.go:180] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0903 23:02:28.134066    1228 ip.go:194] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0903 23:02:28.134066    1228 ip.go:194] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0903 23:02:28.134066    1228 ip.go:189] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0903 23:02:28.134066    1228 ip.go:215] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:71:2e:33 Flags:up|broadcast|multicast|running}
	I0903 23:02:28.137043    1228 ip.go:218] interface addr: fe80::b536:5e95:cebf:bd87/64
	I0903 23:02:28.137043    1228 ip.go:218] interface addr: 172.25.112.1/20
	I0903 23:02:28.148693    1228 ssh_runner.go:195] Run: grep 172.25.112.1	host.minikube.internal$ /etc/hosts
	I0903 23:02:28.154833    1228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.112.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0903 23:02:28.178247    1228 mustload.go:65] Loading cluster: ha-270000
	I0903 23:02:28.179102    1228 config.go:182] Loaded profile config "ha-270000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0903 23:02:28.179799    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 23:02:30.176818    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:02:30.177914    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:02:30.177914    1228 host.go:66] Checking if "ha-270000" exists ...
	I0903 23:02:30.178703    1228 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000 for IP: 172.25.120.53
	I0903 23:02:30.178729    1228 certs.go:194] generating shared ca certs ...
	I0903 23:02:30.178729    1228 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:02:30.179604    1228 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0903 23:02:30.180045    1228 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0903 23:02:30.180212    1228 certs.go:256] generating profile certs ...
	I0903 23:02:30.180938    1228 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\client.key
	I0903 23:02:30.180938    1228 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key.b88c18f2
	I0903 23:02:30.180938    1228 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt.b88c18f2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.116.52 172.25.120.53 172.25.127.254]
	I0903 23:02:30.395907    1228 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt.b88c18f2 ...
	I0903 23:02:30.395907    1228 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt.b88c18f2: {Name:mk7aac0e6550922b9849977e7787842e204aef05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:02:30.397897    1228 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key.b88c18f2 ...
	I0903 23:02:30.397897    1228 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key.b88c18f2: {Name:mk7047c75908bd73cad06091655137c8e83bc1df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:02:30.398333    1228 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt.b88c18f2 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt
	I0903 23:02:30.415352    1228 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key.b88c18f2 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key
	I0903 23:02:30.417340    1228 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.key
	I0903 23:02:30.417340    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0903 23:02:30.417768    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0903 23:02:30.417768    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0903 23:02:30.417768    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0903 23:02:30.417768    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0903 23:02:30.418422    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0903 23:02:30.418422    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0903 23:02:30.418422    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0903 23:02:30.419238    1228 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220.pem (1338 bytes)
	W0903 23:02:30.419387    1228 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220_empty.pem, impossibly tiny 0 bytes
	I0903 23:02:30.419996    1228 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0903 23:02:30.420294    1228 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0903 23:02:30.420550    1228 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0903 23:02:30.420773    1228 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0903 23:02:30.421480    1228 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem (1708 bytes)
	I0903 23:02:30.421480    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220.pem -> /usr/share/ca-certificates/2220.pem
	I0903 23:02:30.421480    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem -> /usr/share/ca-certificates/22202.pem
	I0903 23:02:30.422018    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:02:30.422370    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 23:02:32.479252    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:02:32.479252    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:02:32.480109    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 23:02:34.968850    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 23:02:34.968850    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:02:34.969195    1228 sshutil.go:53] new ssh client: &{IP:172.25.116.52 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000\id_rsa Username:docker}
	I0903 23:02:35.076314    1228 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0903 23:02:35.085021    1228 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0903 23:02:35.124246    1228 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0903 23:02:35.132420    1228 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0903 23:02:35.164678    1228 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0903 23:02:35.173261    1228 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0903 23:02:35.209197    1228 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0903 23:02:35.215885    1228 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0903 23:02:35.252342    1228 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0903 23:02:35.259528    1228 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0903 23:02:35.291980    1228 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0903 23:02:35.298852    1228 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0903 23:02:35.322093    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0903 23:02:35.378446    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0903 23:02:35.435474    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0903 23:02:35.490607    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0903 23:02:35.549811    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0903 23:02:35.606434    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0903 23:02:35.658899    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0903 23:02:35.708021    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0903 23:02:35.761504    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220.pem --> /usr/share/ca-certificates/2220.pem (1338 bytes)
	I0903 23:02:35.810231    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem --> /usr/share/ca-certificates/22202.pem (1708 bytes)
	I0903 23:02:35.860095    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0903 23:02:35.912273    1228 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0903 23:02:35.947580    1228 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0903 23:02:35.983875    1228 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0903 23:02:36.017419    1228 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0903 23:02:36.053674    1228 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0903 23:02:36.090219    1228 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0903 23:02:36.129358    1228 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0903 23:02:36.183062    1228 ssh_runner.go:195] Run: openssl version
	I0903 23:02:36.203475    1228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2220.pem && ln -fs /usr/share/ca-certificates/2220.pem /etc/ssl/certs/2220.pem"
	I0903 23:02:36.239369    1228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2220.pem
	I0903 23:02:36.246639    1228 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  3 22:37 /usr/share/ca-certificates/2220.pem
	I0903 23:02:36.258944    1228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2220.pem
	I0903 23:02:36.284375    1228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2220.pem /etc/ssl/certs/51391683.0"
	I0903 23:02:36.324081    1228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22202.pem && ln -fs /usr/share/ca-certificates/22202.pem /etc/ssl/certs/22202.pem"
	I0903 23:02:36.358547    1228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22202.pem
	I0903 23:02:36.365732    1228 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  3 22:37 /usr/share/ca-certificates/22202.pem
	I0903 23:02:36.379894    1228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22202.pem
	I0903 23:02:36.402052    1228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22202.pem /etc/ssl/certs/3ec20f2e.0"
	I0903 23:02:36.438811    1228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0903 23:02:36.476217    1228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:02:36.485906    1228 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  3 22:20 /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:02:36.498645    1228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:02:36.522003    1228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0903 23:02:36.559937    1228 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0903 23:02:36.567668    1228 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0903 23:02:36.567668    1228 kubeadm.go:926] updating node {m02 172.25.120.53 8443 v1.34.0 docker true true} ...
	I0903 23:02:36.567668    1228 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-270000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.120.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-270000 Namespace:default APIServerHAVIP:172.25.127.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0903 23:02:36.567668    1228 kube-vip.go:115] generating kube-vip config ...
	I0903 23:02:36.580100    1228 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0903 23:02:36.612349    1228 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0903 23:02:36.612349    1228 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.25.127.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
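The YAML above is the kube-vip static-pod manifest that minikube later copies to `/etc/kubernetes/manifests/kube-vip.yaml`: the `address` env var carries the HA virtual IP `172.25.127.254`, and `lb_enable`/`lb_port` switch on control-plane load balancing on 8443. A minimal sketch of pulling the VIP out of such an env block when debugging (awk over an illustrative fragment, not the full manifest):

```shell
# Illustrative fragment of the spec.containers.env block above.
manifest='    - name: address
      value: 172.25.127.254
    - name: lb_enable
      value: "true"'
# The value line follows its name line, so read one line ahead.
vip=$(printf '%s\n' "$manifest" |
  awk '/name: address/ { getline; sub(/.*value: */, ""); print }')
echo "$vip"   # 172.25.127.254
```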
	I0903 23:02:36.624845    1228 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0903 23:02:36.645739    1228 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.0': No such file or directory
	
	Initiating transfer...
	I0903 23:02:36.657760    1228 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.0
	I0903 23:02:36.683565    1228 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubelet
	I0903 23:02:36.684508    1228 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubeadm
	I0903 23:02:36.684508    1228 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubectl
	I0903 23:02:38.285454    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0903 23:02:38.315146    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubelet -> /var/lib/minikube/binaries/v1.34.0/kubelet
	I0903 23:02:38.327145    1228 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubelet
	I0903 23:02:38.332137    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubectl -> /var/lib/minikube/binaries/v1.34.0/kubectl
	I0903 23:02:38.336136    1228 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.0/kubelet': No such file or directory
	I0903 23:02:38.336136    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubelet --> /var/lib/minikube/binaries/v1.34.0/kubelet (59195684 bytes)
	I0903 23:02:38.344136    1228 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubectl
	I0903 23:02:38.404363    1228 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.0/kubectl': No such file or directory
	I0903 23:02:38.404363    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubectl --> /var/lib/minikube/binaries/v1.34.0/kubectl (60559544 bytes)
	I0903 23:02:38.522359    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubeadm -> /var/lib/minikube/binaries/v1.34.0/kubeadm
	I0903 23:02:38.534369    1228 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubeadm
	I0903 23:02:38.586706    1228 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.0/kubeadm': No such file or directory
	I0903 23:02:38.586706    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubeadm --> /var/lib/minikube/binaries/v1.34.0/kubeadm (74027192 bytes)
	I0903 23:02:39.706036    1228 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0903 23:02:39.726158    1228 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0903 23:02:39.760806    1228 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0903 23:02:39.796905    1228 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0903 23:02:39.846006    1228 ssh_runner.go:195] Run: grep 172.25.127.254	control-plane.minikube.internal$ /etc/hosts
	I0903 23:02:39.852610    1228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.127.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
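The `{ grep -v …; echo …; } > /tmp/h.$$; sudo cp` one-liner above is an idempotent host-pinning pattern: strip any stale line for the name, append the current mapping, then copy the rebuilt file over `/etc/hosts`. A self-contained sketch against a scratch file (no sudo; the literal tab is produced portably instead of the bash-only `$'\t'`):

```shell
set -e
hosts=$(mktemp)
tab=$(printf '\t')
# Seed the file with a stale mapping for the control-plane name.
printf '127.0.0.1\tlocalhost\n1.2.3.4\tcontrol-plane.minikube.internal\n' > "$hosts"
# Drop the stale entry, append the fresh one, then swap the file in.
{ grep -v "${tab}control-plane.minikube.internal\$" "$hosts"
  printf '172.25.127.254\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep 'control-plane.minikube.internal' "$hosts"   # only the fresh 172.25.127.254 line
```

Anchoring the pattern on the tab plus a trailing `$` keeps unrelated hosts lines (and longer names) untouched.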
	I0903 23:02:39.887503    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:02:40.130728    1228 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0903 23:02:40.193877    1228 host.go:66] Checking if "ha-270000" exists ...
	I0903 23:02:40.194912    1228 start.go:317] joinCluster: &{Name:ha-270000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-270000 Namespace:default APIServerHAVIP:172.25.127.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.116.52 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.120.53 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:02:40.195036    1228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0903 23:02:40.195215    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 23:02:42.243215    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:02:42.243215    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:02:42.243215    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 23:02:44.762246    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 23:02:44.762373    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:02:44.762881    1228 sshutil.go:53] new ssh client: &{IP:172.25.116.52 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000\id_rsa Username:docker}
	I0903 23:02:44.973287    1228 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.7780989s)
	I0903 23:02:44.973430    1228 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.25.120.53 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0903 23:02:44.973492    1228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5qejmg.nad7xkhs0xgwmu9q --discovery-token-ca-cert-hash sha256:461028e7d31446a9db54ef88db35928fa51812dbcfd2f42c8a70c32665923137 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-270000-m02 --control-plane --apiserver-advertise-address=172.25.120.53 --apiserver-bind-port=8443"
	I0903 23:03:47.510383    1228 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5qejmg.nad7xkhs0xgwmu9q --discovery-token-ca-cert-hash sha256:461028e7d31446a9db54ef88db35928fa51812dbcfd2f42c8a70c32665923137 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-270000-m02 --control-plane --apiserver-advertise-address=172.25.120.53 --apiserver-bind-port=8443": (1m2.5354671s)
	I0903 23:03:47.510383    1228 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0903 23:03:48.183926    1228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-270000-m02 minikube.k8s.io/updated_at=2025_09_03T23_03_48_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=b3583632deefb20d71cab8d8ac0a8c3504aed1fb minikube.k8s.io/name=ha-270000 minikube.k8s.io/primary=false
	I0903 23:03:48.358296    1228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-270000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0903 23:03:48.522856    1228 start.go:319] duration metric: took 1m8.3270012s to joinCluster
	I0903 23:03:48.522856    1228 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.25.120.53 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0903 23:03:48.522856    1228 config.go:182] Loaded profile config "ha-270000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0903 23:03:48.525794    1228 out.go:179] * Verifying Kubernetes components...
	I0903 23:03:48.546149    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:03:48.832449    1228 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0903 23:03:48.861524    1228 kapi.go:59] client config for ha-270000: &rest.Config{Host:"https://172.25.127.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-270000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-270000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24e0580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0903 23:03:48.861524    1228 kubeadm.go:483] Overriding stale ClientConfig host https://172.25.127.254:8443 with https://172.25.116.52:8443
	I0903 23:03:48.863381    1228 node_ready.go:35] waiting up to 6m0s for node "ha-270000-m02" to be "Ready" ...
	W0903 23:03:50.875715    1228 node_ready.go:57] node "ha-270000-m02" has "Ready":"False" status (will retry)
	W0903 23:03:53.370328    1228 node_ready.go:57] node "ha-270000-m02" has "Ready":"False" status (will retry)
	W0903 23:03:56.201778    1228 node_ready.go:57] node "ha-270000-m02" has "Ready":"False" status (will retry)
	W0903 23:03:58.371404    1228 node_ready.go:57] node "ha-270000-m02" has "Ready":"False" status (will retry)
	W0903 23:04:00.874342    1228 node_ready.go:57] node "ha-270000-m02" has "Ready":"False" status (will retry)
	W0903 23:04:03.370845    1228 node_ready.go:57] node "ha-270000-m02" has "Ready":"False" status (will retry)
	W0903 23:04:05.871117    1228 node_ready.go:57] node "ha-270000-m02" has "Ready":"False" status (will retry)
	W0903 23:04:08.370253    1228 node_ready.go:57] node "ha-270000-m02" has "Ready":"False" status (will retry)
	W0903 23:04:10.873593    1228 node_ready.go:57] node "ha-270000-m02" has "Ready":"False" status (will retry)
	I0903 23:04:13.369357    1228 node_ready.go:49] node "ha-270000-m02" is "Ready"
	I0903 23:04:13.369357    1228 node_ready.go:38] duration metric: took 24.5055398s for node "ha-270000-m02" to be "Ready" ...
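The run of "will retry" lines above is minikube polling the node's Ready condition every couple of seconds until it flips to True (24.5s here, bounded by the 6m timeout). The shape of that loop, with a stub standing in for the real query — the actual check is roughly equivalent to `kubectl get node ha-270000-m02 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'` (a hypothetical equivalent, not minikube's own code):

```shell
# check_ready is a stand-in that reports Ready on the third poll.
tries=0
check_ready() { tries=$((tries + 1)); [ "$tries" -ge 3 ]; }
until check_ready; do
  sleep 1   # minikube sleeps between polls, up to the 6m0s deadline
done
echo "Ready after $tries polls"   # Ready after 3 polls
```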
	I0903 23:04:13.369357    1228 api_server.go:52] waiting for apiserver process to appear ...
	I0903 23:04:13.381776    1228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:04:13.418265    1228 api_server.go:72] duration metric: took 24.8950628s to wait for apiserver process to appear ...
	I0903 23:04:13.418343    1228 api_server.go:88] waiting for apiserver healthz status ...
	I0903 23:04:13.418343    1228 api_server.go:253] Checking apiserver healthz at https://172.25.116.52:8443/healthz ...
	I0903 23:04:13.427097    1228 api_server.go:279] https://172.25.116.52:8443/healthz returned 200:
	ok
	I0903 23:04:13.429087    1228 api_server.go:141] control plane version: v1.34.0
	I0903 23:04:13.429087    1228 api_server.go:131] duration metric: took 10.7443ms to wait for apiserver health ...
	I0903 23:04:13.429087    1228 system_pods.go:43] waiting for kube-system pods to appear ...
	I0903 23:04:13.448347    1228 system_pods.go:59] 17 kube-system pods found
	I0903 23:04:13.448409    1228 system_pods.go:61] "coredns-66bc5c9577-58qw9" [e4c3bec4-9c47-404e-98ff-21e0aee82931] Running
	I0903 23:04:13.448409    1228 system_pods.go:61] "coredns-66bc5c9577-cnk8d" [20226b19-1d13-4057-88c1-709997f24868] Running
	I0903 23:04:13.448409    1228 system_pods.go:61] "etcd-ha-270000" [bedaa6e6-7109-475b-b96e-34178b2a83e2] Running
	I0903 23:04:13.448409    1228 system_pods.go:61] "etcd-ha-270000-m02" [d123ed06-ba3b-4745-a419-0b7720e9e903] Running
	I0903 23:04:13.448409    1228 system_pods.go:61] "kindnet-96trb" [32ea1443-99f0-4e56-99cb-d1ce43dbcb2f] Running
	I0903 23:04:13.448409    1228 system_pods.go:61] "kindnet-vsgwr" [aa24d517-8c6d-4625-bd97-6f7fe1f7f72e] Running
	I0903 23:04:13.448409    1228 system_pods.go:61] "kube-apiserver-ha-270000" [8b258bec-c81d-404f-b217-dccd40799d89] Running
	I0903 23:04:13.448409    1228 system_pods.go:61] "kube-apiserver-ha-270000-m02" [16ba52a6-4dfc-487f-9bc9-65d94e1fffd8] Running
	I0903 23:04:13.448409    1228 system_pods.go:61] "kube-controller-manager-ha-270000" [a695c6ed-2e2f-41ea-a250-9b01b1ae90af] Running
	I0903 23:04:13.448409    1228 system_pods.go:61] "kube-controller-manager-ha-270000-m02" [f39fb141-4af3-4207-8f1c-1ce77b760861] Running
	I0903 23:04:13.448409    1228 system_pods.go:61] "kube-proxy-qkts6" [8e651463-997a-4431-a14c-29557282565f] Running
	I0903 23:04:13.448409    1228 system_pods.go:61] "kube-proxy-t96st" [f609fa93-da46-46a5-ba36-84c291da86a5] Running
	I0903 23:04:13.448409    1228 system_pods.go:61] "kube-scheduler-ha-270000" [a257c6a6-4337-49fd-ba96-c6248221f207] Running
	I0903 23:04:13.448409    1228 system_pods.go:61] "kube-scheduler-ha-270000-m02" [5c49ee66-b613-4b3c-9539-da558d1dd53a] Running
	I0903 23:04:13.448409    1228 system_pods.go:61] "kube-vip-ha-270000" [4a489bea-b3e7-43bd-96e0-58c1480000a4] Running
	I0903 23:04:13.448409    1228 system_pods.go:61] "kube-vip-ha-270000-m02" [163cfde8-7488-49ac-b241-2509a7b01d1b] Running
	I0903 23:04:13.448409    1228 system_pods.go:61] "storage-provisioner" [7643327e-078c-45c9-9a32-cdf3b7a72986] Running
	I0903 23:04:13.448409    1228 system_pods.go:74] duration metric: took 19.3217ms to wait for pod list to return data ...
	I0903 23:04:13.448409    1228 default_sa.go:34] waiting for default service account to be created ...
	I0903 23:04:13.454264    1228 default_sa.go:45] found service account: "default"
	I0903 23:04:13.454264    1228 default_sa.go:55] duration metric: took 5.8552ms for default service account to be created ...
	I0903 23:04:13.454264    1228 system_pods.go:116] waiting for k8s-apps to be running ...
	I0903 23:04:13.462247    1228 system_pods.go:86] 17 kube-system pods found
	I0903 23:04:13.462302    1228 system_pods.go:89] "coredns-66bc5c9577-58qw9" [e4c3bec4-9c47-404e-98ff-21e0aee82931] Running
	I0903 23:04:13.462302    1228 system_pods.go:89] "coredns-66bc5c9577-cnk8d" [20226b19-1d13-4057-88c1-709997f24868] Running
	I0903 23:04:13.462302    1228 system_pods.go:89] "etcd-ha-270000" [bedaa6e6-7109-475b-b96e-34178b2a83e2] Running
	I0903 23:04:13.462302    1228 system_pods.go:89] "etcd-ha-270000-m02" [d123ed06-ba3b-4745-a419-0b7720e9e903] Running
	I0903 23:04:13.462368    1228 system_pods.go:89] "kindnet-96trb" [32ea1443-99f0-4e56-99cb-d1ce43dbcb2f] Running
	I0903 23:04:13.462368    1228 system_pods.go:89] "kindnet-vsgwr" [aa24d517-8c6d-4625-bd97-6f7fe1f7f72e] Running
	I0903 23:04:13.462368    1228 system_pods.go:89] "kube-apiserver-ha-270000" [8b258bec-c81d-404f-b217-dccd40799d89] Running
	I0903 23:04:13.462368    1228 system_pods.go:89] "kube-apiserver-ha-270000-m02" [16ba52a6-4dfc-487f-9bc9-65d94e1fffd8] Running
	I0903 23:04:13.462425    1228 system_pods.go:89] "kube-controller-manager-ha-270000" [a695c6ed-2e2f-41ea-a250-9b01b1ae90af] Running
	I0903 23:04:13.462459    1228 system_pods.go:89] "kube-controller-manager-ha-270000-m02" [f39fb141-4af3-4207-8f1c-1ce77b760861] Running
	I0903 23:04:13.462459    1228 system_pods.go:89] "kube-proxy-qkts6" [8e651463-997a-4431-a14c-29557282565f] Running
	I0903 23:04:13.462488    1228 system_pods.go:89] "kube-proxy-t96st" [f609fa93-da46-46a5-ba36-84c291da86a5] Running
	I0903 23:04:13.462510    1228 system_pods.go:89] "kube-scheduler-ha-270000" [a257c6a6-4337-49fd-ba96-c6248221f207] Running
	I0903 23:04:13.462510    1228 system_pods.go:89] "kube-scheduler-ha-270000-m02" [5c49ee66-b613-4b3c-9539-da558d1dd53a] Running
	I0903 23:04:13.462510    1228 system_pods.go:89] "kube-vip-ha-270000" [4a489bea-b3e7-43bd-96e0-58c1480000a4] Running
	I0903 23:04:13.462510    1228 system_pods.go:89] "kube-vip-ha-270000-m02" [163cfde8-7488-49ac-b241-2509a7b01d1b] Running
	I0903 23:04:13.462510    1228 system_pods.go:89] "storage-provisioner" [7643327e-078c-45c9-9a32-cdf3b7a72986] Running
	I0903 23:04:13.462566    1228 system_pods.go:126] duration metric: took 8.2453ms to wait for k8s-apps to be running ...
	I0903 23:04:13.462566    1228 system_svc.go:44] waiting for kubelet service to be running ....
	I0903 23:04:13.478377    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0903 23:04:13.509754    1228 system_svc.go:56] duration metric: took 47.1871ms WaitForService to wait for kubelet
	I0903 23:04:13.509906    1228 kubeadm.go:578] duration metric: took 24.986641s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0903 23:04:13.509906    1228 node_conditions.go:102] verifying NodePressure condition ...
	I0903 23:04:13.516151    1228 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0903 23:04:13.516151    1228 node_conditions.go:123] node cpu capacity is 2
	I0903 23:04:13.516151    1228 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0903 23:04:13.517172    1228 node_conditions.go:123] node cpu capacity is 2
	I0903 23:04:13.517172    1228 node_conditions.go:105] duration metric: took 7.2658ms to run NodePressure ...
	I0903 23:04:13.517172    1228 start.go:241] waiting for startup goroutines ...
	I0903 23:04:13.517172    1228 start.go:255] writing updated cluster config ...
	I0903 23:04:13.525149    1228 out.go:203] 
	I0903 23:04:13.536143    1228 config.go:182] Loaded profile config "ha-270000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0903 23:04:13.537142    1228 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\config.json ...
	I0903 23:04:13.543137    1228 out.go:179] * Starting "ha-270000-m03" control-plane node in "ha-270000" cluster
	I0903 23:04:13.546141    1228 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0903 23:04:13.546141    1228 cache.go:58] Caching tarball of preloaded images
	I0903 23:04:13.547140    1228 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0903 23:04:13.547140    1228 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0903 23:04:13.547140    1228 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\config.json ...
	I0903 23:04:13.552141    1228 start.go:360] acquireMachinesLock for ha-270000-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0903 23:04:13.553161    1228 start.go:364] duration metric: took 1.0198ms to acquireMachinesLock for "ha-270000-m03"
	I0903 23:04:13.553161    1228 start.go:93] Provisioning new machine with config: &{Name:ha-270000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.34.0 ClusterName:ha-270000 Namespace:default APIServerHAVIP:172.25.127.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.116.52 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.120.53 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingr
ess:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0903 23:04:13.553161    1228 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0903 23:04:13.556149    1228 out.go:252] * Creating hyperv VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0903 23:04:13.556149    1228 start.go:159] libmachine.API.Create for "ha-270000" (driver="hyperv")
	I0903 23:04:13.556149    1228 client.go:168] LocalClient.Create starting
	I0903 23:04:13.557175    1228 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0903 23:04:13.557175    1228 main.go:141] libmachine: Decoding PEM data...
	I0903 23:04:13.557175    1228 main.go:141] libmachine: Parsing certificate...
	I0903 23:04:13.557175    1228 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0903 23:04:13.558142    1228 main.go:141] libmachine: Decoding PEM data...
	I0903 23:04:13.558142    1228 main.go:141] libmachine: Parsing certificate...
	I0903 23:04:13.558142    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0903 23:04:15.458396    1228 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0903 23:04:15.458396    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:04:15.458396    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0903 23:04:17.185387    1228 main.go:141] libmachine: [stdout =====>] : False
	
	I0903 23:04:17.185462    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:04:17.185462    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0903 23:04:18.680897    1228 main.go:141] libmachine: [stdout =====>] : True
	
	I0903 23:04:18.680897    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:04:18.680897    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0903 23:04:22.388806    1228 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0903 23:04:22.388871    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:04:22.391054    1228 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.36.0-1753487480-21147-amd64.iso...
	I0903 23:04:23.085163    1228 main.go:141] libmachine: Creating SSH key...
	I0903 23:04:23.461451    1228 main.go:141] libmachine: Creating VM...
	I0903 23:04:23.461451    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0903 23:04:26.351942    1228 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0903 23:04:26.351942    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:04:26.351942    1228 main.go:141] libmachine: Using switch "Default Switch"
	I0903 23:04:26.351942    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0903 23:04:28.196768    1228 main.go:141] libmachine: [stdout =====>] : True
	
	I0903 23:04:28.196768    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:04:28.196768    1228 main.go:141] libmachine: Creating VHD
	I0903 23:04:28.197216    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0903 23:04:31.939199    1228 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : BA115B43-14A8-4C03-8065-3CE69285267E
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0903 23:04:31.939620    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:04:31.939702    1228 main.go:141] libmachine: Writing magic tar header
	I0903 23:04:31.939702    1228 main.go:141] libmachine: Writing SSH key tar header
	I0903 23:04:31.952710    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0903 23:04:35.106317    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:04:35.107044    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:04:35.107044    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m03\disk.vhd' -SizeBytes 20000MB
	I0903 23:04:37.603159    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:04:37.603195    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:04:37.603258    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-270000-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 3072MB
	I0903 23:04:41.277675    1228 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-270000-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0903 23:04:41.277969    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:04:41.277969    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-270000-m03 -DynamicMemoryEnabled $false
	I0903 23:04:43.473805    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:04:43.473883    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:04:43.473945    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-270000-m03 -Count 2
	I0903 23:04:45.607596    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:04:45.607596    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:04:45.608052    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-270000-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m03\boot2docker.iso'
	I0903 23:04:48.145327    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:04:48.145542    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:04:48.145542    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-270000-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m03\disk.vhd'
	I0903 23:04:50.779285    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:04:50.779285    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:04:50.779285    1228 main.go:141] libmachine: Starting VM...
	I0903 23:04:50.779621    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-270000-m03
	I0903 23:04:53.888059    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:04:53.888316    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:04:53.891797    1228 main.go:141] libmachine: Waiting for host to start...
	I0903 23:04:53.892119    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:04:56.183737    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:04:56.183737    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:04:56.183737    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:04:58.765903    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:04:58.765903    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:04:59.767590    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:05:01.978550    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:05:01.978836    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:01.978836    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:05:04.506980    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:05:04.506980    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:05.507971    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:05:07.704073    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:05:07.704073    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:07.704073    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:05:10.264880    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:05:10.264880    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:11.265698    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:05:13.513097    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:05:13.513859    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:13.514043    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:05:16.035745    1228 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:05:16.036200    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:17.037160    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:05:19.211715    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:05:19.212181    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:19.212312    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:05:21.890196    1228 main.go:141] libmachine: [stdout =====>] : 172.25.124.104
	
	I0903 23:05:21.890247    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:21.890295    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:05:24.081974    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:05:24.082401    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:24.082401    1228 machine.go:93] provisionDockerMachine start ...
	I0903 23:05:24.082478    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:05:26.306587    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:05:26.306587    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:26.306587    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:05:29.051419    1228 main.go:141] libmachine: [stdout =====>] : 172.25.124.104
	
	I0903 23:05:29.052412    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:29.058240    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 23:05:29.059063    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.124.104 22 <nil> <nil>}
	I0903 23:05:29.059063    1228 main.go:141] libmachine: About to run SSH command:
	hostname
	I0903 23:05:29.209123    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0903 23:05:29.209123    1228 buildroot.go:166] provisioning hostname "ha-270000-m03"
	I0903 23:05:29.209232    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:05:31.344098    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:05:31.344098    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:31.344528    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:05:33.888473    1228 main.go:141] libmachine: [stdout =====>] : 172.25.124.104
	
	I0903 23:05:33.888473    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:33.895696    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 23:05:33.895867    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.124.104 22 <nil> <nil>}
	I0903 23:05:33.895867    1228 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-270000-m03 && echo "ha-270000-m03" | sudo tee /etc/hostname
	I0903 23:05:34.057614    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-270000-m03
	
	I0903 23:05:34.057746    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:05:36.122549    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:05:36.123434    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:36.123434    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:05:38.665402    1228 main.go:141] libmachine: [stdout =====>] : 172.25.124.104
	
	I0903 23:05:38.666532    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:38.674788    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 23:05:38.674788    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.124.104 22 <nil> <nil>}
	I0903 23:05:38.674788    1228 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-270000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-270000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-270000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0903 23:05:38.826994    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0903 23:05:38.826994    1228 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0903 23:05:38.826994    1228 buildroot.go:174] setting up certificates
	I0903 23:05:38.826994    1228 provision.go:84] configureAuth start
	I0903 23:05:38.827837    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:05:40.926491    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:05:40.926580    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:40.926655    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:05:43.462512    1228 main.go:141] libmachine: [stdout =====>] : 172.25.124.104
	
	I0903 23:05:43.462512    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:43.462605    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:05:45.554866    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:05:45.554866    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:45.555726    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:05:48.043996    1228 main.go:141] libmachine: [stdout =====>] : 172.25.124.104
	
	I0903 23:05:48.044741    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:48.044741    1228 provision.go:143] copyHostCerts
	I0903 23:05:48.044972    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0903 23:05:48.045104    1228 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0903 23:05:48.045104    1228 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0903 23:05:48.045630    1228 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0903 23:05:48.046882    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0903 23:05:48.046882    1228 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0903 23:05:48.046882    1228 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0903 23:05:48.047793    1228 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0903 23:05:48.049153    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0903 23:05:48.049183    1228 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0903 23:05:48.049183    1228 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0903 23:05:48.049745    1228 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0903 23:05:48.050544    1228 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-270000-m03 san=[127.0.0.1 172.25.124.104 ha-270000-m03 localhost minikube]
	I0903 23:05:48.545736    1228 provision.go:177] copyRemoteCerts
	I0903 23:05:48.564660    1228 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0903 23:05:48.564660    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:05:50.693480    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:05:50.693712    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:50.693975    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:05:53.183071    1228 main.go:141] libmachine: [stdout =====>] : 172.25.124.104
	
	I0903 23:05:53.183245    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:53.183312    1228 sshutil.go:53] new ssh client: &{IP:172.25.124.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m03\id_rsa Username:docker}
	I0903 23:05:53.300960    1228 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.736234s)
	I0903 23:05:53.300960    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0903 23:05:53.301777    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0903 23:05:53.358470    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0903 23:05:53.358470    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0903 23:05:53.417180    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0903 23:05:53.417373    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0903 23:05:53.470867    1228 provision.go:87] duration metric: took 14.6436682s to configureAuth
	I0903 23:05:53.470867    1228 buildroot.go:189] setting minikube options for container-runtime
	I0903 23:05:53.471790    1228 config.go:182] Loaded profile config "ha-270000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0903 23:05:53.471943    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:05:55.539366    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:05:55.539885    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:55.539982    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:05:58.056561    1228 main.go:141] libmachine: [stdout =====>] : 172.25.124.104
	
	I0903 23:05:58.056561    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:05:58.063313    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 23:05:58.064116    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.124.104 22 <nil> <nil>}
	I0903 23:05:58.064116    1228 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0903 23:05:58.199787    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0903 23:05:58.199787    1228 buildroot.go:70] root file system type: tmpfs
	I0903 23:05:58.199787    1228 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0903 23:05:58.200385    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:06:00.313410    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:06:00.314421    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:00.314649    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:06:02.815102    1228 main.go:141] libmachine: [stdout =====>] : 172.25.124.104
	
	I0903 23:06:02.815276    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:02.820306    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 23:06:02.821108    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.124.104 22 <nil> <nil>}
	I0903 23:06:02.821108    1228 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=172.25.116.52"
	Environment="NO_PROXY=172.25.116.52,172.25.120.53"
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0903 23:06:02.993633    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=172.25.116.52
	Environment=NO_PROXY=172.25.116.52,172.25.120.53
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0903 23:06:02.993701    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:06:05.123743    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:06:05.123854    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:05.123959    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:06:07.647440    1228 main.go:141] libmachine: [stdout =====>] : 172.25.124.104
	
	I0903 23:06:07.647440    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:07.654073    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 23:06:07.654594    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.124.104 22 <nil> <nil>}
	I0903 23:06:07.654594    1228 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0903 23:06:09.115301    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
	
	I0903 23:06:09.115367    1228 machine.go:96] duration metric: took 45.0323378s to provisionDockerMachine
	I0903 23:06:09.115426    1228 client.go:171] duration metric: took 1m55.5576678s to LocalClient.Create
	I0903 23:06:09.115426    1228 start.go:167] duration metric: took 1m55.5576678s to libmachine.API.Create "ha-270000"
	I0903 23:06:09.115426    1228 start.go:293] postStartSetup for "ha-270000-m03" (driver="hyperv")
	I0903 23:06:09.115513    1228 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0903 23:06:09.129223    1228 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0903 23:06:09.129223    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:06:11.253419    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:06:11.253419    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:11.254137    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:06:13.800457    1228 main.go:141] libmachine: [stdout =====>] : 172.25.124.104
	
	I0903 23:06:13.800457    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:13.801548    1228 sshutil.go:53] new ssh client: &{IP:172.25.124.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m03\id_rsa Username:docker}
	I0903 23:06:13.907399    1228 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7781095s)
	I0903 23:06:13.920598    1228 ssh_runner.go:195] Run: cat /etc/os-release
	I0903 23:06:13.928728    1228 info.go:137] Remote host: Buildroot 2025.02
	I0903 23:06:13.928728    1228 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0903 23:06:13.929350    1228 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0903 23:06:13.930876    1228 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem -> 22202.pem in /etc/ssl/certs
	I0903 23:06:13.930876    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem -> /etc/ssl/certs/22202.pem
	I0903 23:06:13.944298    1228 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0903 23:06:13.965627    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem --> /etc/ssl/certs/22202.pem (1708 bytes)
	I0903 23:06:14.022524    1228 start.go:296] duration metric: took 4.9069426s for postStartSetup
	I0903 23:06:14.025110    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:06:16.118829    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:06:16.118829    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:16.119880    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:06:18.669287    1228 main.go:141] libmachine: [stdout =====>] : 172.25.124.104
	
	I0903 23:06:18.669287    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:18.669575    1228 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\config.json ...
	I0903 23:06:18.672473    1228 start.go:128] duration metric: took 2m5.1175697s to createHost
	I0903 23:06:18.672473    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:06:20.816404    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:06:20.816923    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:20.816923    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:06:23.469795    1228 main.go:141] libmachine: [stdout =====>] : 172.25.124.104
	
	I0903 23:06:23.469795    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:23.477301    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 23:06:23.477917    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.124.104 22 <nil> <nil>}
	I0903 23:06:23.477917    1228 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0903 23:06:23.635065    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: 1756940783.657838700
	
	I0903 23:06:23.635065    1228 fix.go:216] guest clock: 1756940783.657838700
	I0903 23:06:23.635065    1228 fix.go:229] Guest: 2025-09-03 23:06:23.6578387 +0000 UTC Remote: 2025-09-03 23:06:18.6724738 +0000 UTC m=+578.210851701 (delta=4.9853649s)
	I0903 23:06:23.635227    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:06:25.740790    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:06:25.740790    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:25.741084    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:06:28.303932    1228 main.go:141] libmachine: [stdout =====>] : 172.25.124.104
	
	I0903 23:06:28.303932    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:28.310038    1228 main.go:141] libmachine: Using SSH client type: native
	I0903 23:06:28.310528    1228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.124.104 22 <nil> <nil>}
	I0903 23:06:28.310528    1228 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1756940783
	I0903 23:06:28.465023    1228 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Sep  3 23:06:23 UTC 2025
	
	I0903 23:06:28.465023    1228 fix.go:236] clock set: Wed Sep  3 23:06:23 UTC 2025
	 (err=<nil>)
	I0903 23:06:28.465023    1228 start.go:83] releasing machines lock for "ha-270000-m03", held for 2m14.9099825s
	I0903 23:06:28.465023    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:06:30.544444    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:06:30.545209    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:30.545209    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:06:33.073920    1228 main.go:141] libmachine: [stdout =====>] : 172.25.124.104
	
	I0903 23:06:33.074930    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:33.078955    1228 out.go:179] * Found network options:
	I0903 23:06:33.081858    1228 out.go:179]   - NO_PROXY=172.25.116.52,172.25.120.53
	W0903 23:06:33.084584    1228 proxy.go:120] fail to check proxy env: Error ip not in block
	W0903 23:06:33.084584    1228 proxy.go:120] fail to check proxy env: Error ip not in block
	I0903 23:06:33.087373    1228 out.go:179]   - NO_PROXY=172.25.116.52,172.25.120.53
	W0903 23:06:33.090426    1228 proxy.go:120] fail to check proxy env: Error ip not in block
	W0903 23:06:33.090426    1228 proxy.go:120] fail to check proxy env: Error ip not in block
	W0903 23:06:33.092544    1228 proxy.go:120] fail to check proxy env: Error ip not in block
	W0903 23:06:33.092544    1228 proxy.go:120] fail to check proxy env: Error ip not in block
	I0903 23:06:33.094513    1228 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0903 23:06:33.094513    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:06:33.110530    1228 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0903 23:06:33.110530    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000-m03 ).state
	I0903 23:06:35.293515    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:06:35.294361    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:35.294417    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:06:35.312948    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:06:35.313158    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:35.313236    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000-m03 ).networkadapters[0]).ipaddresses[0]
	I0903 23:06:38.046907    1228 main.go:141] libmachine: [stdout =====>] : 172.25.124.104
	
	I0903 23:06:38.047112    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:38.047623    1228 sshutil.go:53] new ssh client: &{IP:172.25.124.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m03\id_rsa Username:docker}
	I0903 23:06:38.075069    1228 main.go:141] libmachine: [stdout =====>] : 172.25.124.104
	
	I0903 23:06:38.075069    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:38.075963    1228 sshutil.go:53] new ssh client: &{IP:172.25.124.104 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000-m03\id_rsa Username:docker}
	I0903 23:06:38.149275    1228 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0386746s)
	W0903 23:06:38.149404    1228 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0903 23:06:38.164748    1228 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0903 23:06:38.171734    1228 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.0771498s)
	W0903 23:06:38.171801    1228 start.go:868] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0903 23:06:38.207707    1228 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0903 23:06:38.207707    1228 start.go:495] detecting cgroup driver to use...
	I0903 23:06:38.208062    1228 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0903 23:06:38.265557    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0903 23:06:38.302455    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0903 23:06:38.331350    1228 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0903 23:06:38.343619    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W0903 23:06:38.359070    1228 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0903 23:06:38.359132    1228 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0903 23:06:38.384465    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0903 23:06:38.423162    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0903 23:06:38.458408    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0903 23:06:38.493305    1228 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0903 23:06:38.531230    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0903 23:06:38.565909    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0903 23:06:38.600043    1228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0903 23:06:38.634767    1228 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0903 23:06:38.652397    1228 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0903 23:06:38.664265    1228 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0903 23:06:38.699141    1228 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0903 23:06:38.730070    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:06:38.961397    1228 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0903 23:06:39.022475    1228 start.go:495] detecting cgroup driver to use...
	I0903 23:06:39.036942    1228 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0903 23:06:39.077336    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0903 23:06:39.116912    1228 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0903 23:06:39.161254    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0903 23:06:39.199212    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0903 23:06:39.238342    1228 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0903 23:06:39.315341    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0903 23:06:39.344072    1228 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0903 23:06:39.397297    1228 ssh_runner.go:195] Run: which cri-dockerd
	I0903 23:06:39.418001    1228 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0903 23:06:39.440310    1228 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0903 23:06:39.491719    1228 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0903 23:06:39.729269    1228 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0903 23:06:39.947686    1228 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0903 23:06:39.947777    1228 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0903 23:06:40.001627    1228 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0903 23:06:40.038287    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:06:40.274902    1228 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0903 23:06:40.456412    1228 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0903 23:06:40.495410    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0903 23:06:40.533384    1228 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0903 23:06:40.580404    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:06:40.856229    1228 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0903 23:06:41.921612    1228 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.0652355s)
	I0903 23:06:41.921612    1228 retry.go:31] will retry after 728.305379ms: docker not running
	I0903 23:06:42.664531    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0903 23:06:42.703526    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0903 23:06:42.742133    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0903 23:06:42.777734    1228 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0903 23:06:43.015908    1228 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0903 23:06:43.272453    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:06:43.519851    1228 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0903 23:06:43.585328    1228 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0903 23:06:43.621896    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:06:43.859063    1228 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0903 23:06:44.024038    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0903 23:06:44.056214    1228 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0903 23:06:44.069408    1228 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0903 23:06:44.079721    1228 start.go:563] Will wait 60s for crictl version
	I0903 23:06:44.090863    1228 ssh_runner.go:195] Run: which crictl
	I0903 23:06:44.111500    1228 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0903 23:06:44.169262    1228 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.3.2
	RuntimeApiVersion:  v1
	I0903 23:06:44.180309    1228 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0903 23:06:44.225411    1228 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0903 23:06:44.259030    1228 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.3.2 ...
	I0903 23:06:44.263553    1228 out.go:179]   - env NO_PROXY=172.25.116.52
	I0903 23:06:44.267275    1228 out.go:179]   - env NO_PROXY=172.25.116.52,172.25.120.53
	I0903 23:06:44.269240    1228 ip.go:180] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0903 23:06:44.273811    1228 ip.go:194] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0903 23:06:44.273811    1228 ip.go:194] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0903 23:06:44.273811    1228 ip.go:189] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0903 23:06:44.273811    1228 ip.go:215] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:71:2e:33 Flags:up|broadcast|multicast|running}
	I0903 23:06:44.276853    1228 ip.go:218] interface addr: fe80::b536:5e95:cebf:bd87/64
	I0903 23:06:44.276853    1228 ip.go:218] interface addr: 172.25.112.1/20
	I0903 23:06:44.286865    1228 ssh_runner.go:195] Run: grep 172.25.112.1	host.minikube.internal$ /etc/hosts
	I0903 23:06:44.294965    1228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.112.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
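The two Run lines above show minikube's idempotent /etc/hosts update: grep out any stale `host.minikube.internal` entry, append the current mapping, and copy the rebuilt file back into place. A minimal sketch of the same pattern, run against a scratch file instead of the real /etc/hosts (the file path and IP are illustrative; assumes bash for the `$'\t'` quoting):

```shell
# Build a scratch hosts file with one stale minikube entry.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"

# Strip any existing host.minikube.internal line, then append the current
# mapping; writing to a temp file first keeps the swap close to atomic.
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '172.25.112.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"

cat "$hosts"   # exactly one host.minikube.internal entry remains
```

Because the old entry is removed before the new one is appended, re-running the same command never accumulates duplicate lines.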
	I0903 23:06:44.329518    1228 mustload.go:65] Loading cluster: ha-270000
	I0903 23:06:44.330415    1228 config.go:182] Loaded profile config "ha-270000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0903 23:06:44.330608    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 23:06:46.385558    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:06:46.386143    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:46.386211    1228 host.go:66] Checking if "ha-270000" exists ...
	I0903 23:06:46.387029    1228 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000 for IP: 172.25.124.104
	I0903 23:06:46.387029    1228 certs.go:194] generating shared ca certs ...
	I0903 23:06:46.387103    1228 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:06:46.387843    1228 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0903 23:06:46.388159    1228 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0903 23:06:46.388454    1228 certs.go:256] generating profile certs ...
	I0903 23:06:46.389339    1228 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\client.key
	I0903 23:06:46.389527    1228 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key.df715e79
	I0903 23:06:46.389629    1228 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt.df715e79 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.116.52 172.25.120.53 172.25.124.104 172.25.127.254]
	I0903 23:06:46.513919    1228 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt.df715e79 ...
	I0903 23:06:46.513919    1228 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt.df715e79: {Name:mk94aec58ef12e28df00a53b1ba486364e2a26de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:06:46.514917    1228 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key.df715e79 ...
	I0903 23:06:46.514917    1228 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key.df715e79: {Name:mke5f3cdb87dd957c6b68c229eb55ba6edd3a6bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:06:46.515919    1228 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt.df715e79 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt
	I0903 23:06:46.534781    1228 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key.df715e79 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key
	I0903 23:06:46.535675    1228 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.key
	I0903 23:06:46.535675    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0903 23:06:46.536728    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0903 23:06:46.536728    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0903 23:06:46.536728    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0903 23:06:46.536728    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0903 23:06:46.537374    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0903 23:06:46.537636    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0903 23:06:46.537808    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0903 23:06:46.538008    1228 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220.pem (1338 bytes)
	W0903 23:06:46.538008    1228 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220_empty.pem, impossibly tiny 0 bytes
	I0903 23:06:46.538688    1228 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0903 23:06:46.539248    1228 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0903 23:06:46.539943    1228 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0903 23:06:46.540572    1228 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0903 23:06:46.540805    1228 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem (1708 bytes)
	I0903 23:06:46.541562    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem -> /usr/share/ca-certificates/22202.pem
	I0903 23:06:46.541867    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:06:46.542052    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220.pem -> /usr/share/ca-certificates/2220.pem
	I0903 23:06:46.542253    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 23:06:48.607056    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:06:48.607056    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:48.607118    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 23:06:51.177294    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 23:06:51.177294    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:51.177422    1228 sshutil.go:53] new ssh client: &{IP:172.25.116.52 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000\id_rsa Username:docker}
	I0903 23:06:51.282087    1228 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0903 23:06:51.290105    1228 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0903 23:06:51.333795    1228 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0903 23:06:51.340621    1228 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0903 23:06:51.378055    1228 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0903 23:06:51.386883    1228 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0903 23:06:51.423610    1228 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0903 23:06:51.431448    1228 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0903 23:06:51.471331    1228 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0903 23:06:51.478567    1228 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0903 23:06:51.514108    1228 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0903 23:06:51.524914    1228 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0903 23:06:51.553631    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0903 23:06:51.606429    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0903 23:06:51.667391    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0903 23:06:51.722169    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0903 23:06:51.774885    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0903 23:06:51.833011    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0903 23:06:51.887267    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0903 23:06:51.945618    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-270000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0903 23:06:52.001306    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem --> /usr/share/ca-certificates/22202.pem (1708 bytes)
	I0903 23:06:52.070756    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0903 23:06:52.132509    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220.pem --> /usr/share/ca-certificates/2220.pem (1338 bytes)
	I0903 23:06:52.186367    1228 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0903 23:06:52.227505    1228 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0903 23:06:52.265085    1228 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0903 23:06:52.302624    1228 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0903 23:06:52.349532    1228 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0903 23:06:52.412534    1228 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0903 23:06:52.455565    1228 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0903 23:06:52.518835    1228 ssh_runner.go:195] Run: openssl version
	I0903 23:06:52.543426    1228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0903 23:06:52.589057    1228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:06:52.597735    1228 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  3 22:20 /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:06:52.610576    1228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:06:52.643456    1228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0903 23:06:52.688338    1228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2220.pem && ln -fs /usr/share/ca-certificates/2220.pem /etc/ssl/certs/2220.pem"
	I0903 23:06:52.733583    1228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2220.pem
	I0903 23:06:52.741947    1228 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  3 22:37 /usr/share/ca-certificates/2220.pem
	I0903 23:06:52.754006    1228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2220.pem
	I0903 23:06:52.792356    1228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2220.pem /etc/ssl/certs/51391683.0"
	I0903 23:06:52.832776    1228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22202.pem && ln -fs /usr/share/ca-certificates/22202.pem /etc/ssl/certs/22202.pem"
	I0903 23:06:52.869746    1228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22202.pem
	I0903 23:06:52.877760    1228 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  3 22:37 /usr/share/ca-certificates/22202.pem
	I0903 23:06:52.888883    1228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22202.pem
	I0903 23:06:52.922649    1228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22202.pem /etc/ssl/certs/3ec20f2e.0"
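The `openssl x509 -hash` / `ln -fs` pairs above exist because OpenSSL's `-CApath` lookup finds CA certificates through `<subject-hash>.0` symlinks, which is exactly what minikube is creating under /etc/ssl/certs. A minimal sketch of the mechanism against a throwaway self-signed certificate in a scratch directory (all names are illustrative; assumes a reasonably modern `openssl` on PATH):

```shell
# Scratch trust directory with one self-signed CA certificate.
certs=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demoCA' \
  -keyout "$certs/demo.key" -out "$certs/demo.pem" -days 1 2>/dev/null

# Compute the subject hash and create the <hash>.0 symlink that
# -CApath lookups depend on.
hash=$(openssl x509 -hash -noout -in "$certs/demo.pem")
ln -fs "$certs/demo.pem" "$certs/$hash.0"

# Verification succeeds only because the hash symlink is present.
openssl verify -CApath "$certs" "$certs/demo.pem"
```

Without the `<hash>.0` link, the same `openssl verify -CApath` call fails even though the certificate file sits in the directory, which is why the log pairs every copied CA with a symlink command.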
	I0903 23:06:52.957158    1228 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0903 23:06:52.964865    1228 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0903 23:06:52.965197    1228 kubeadm.go:926] updating node {m03 172.25.124.104 8443 v1.34.0 docker true true} ...
	I0903 23:06:52.965272    1228 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-270000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.124.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-270000 Namespace:default APIServerHAVIP:172.25.127.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0903 23:06:52.965490    1228 kube-vip.go:115] generating kube-vip config ...
	I0903 23:06:52.977405    1228 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0903 23:06:53.008106    1228 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0903 23:06:53.008254    1228 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.25.127.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable

	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0903 23:06:53.022735    1228 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0903 23:06:53.043509    1228 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.0': No such file or directory
	
	Initiating transfer...
	I0903 23:06:53.056539    1228 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.0
	I0903 23:06:53.077863    1228 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
	I0903 23:06:53.077915    1228 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubelet.sha256
	I0903 23:06:53.078032    1228 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubeadm.sha256
	I0903 23:06:53.078032    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubectl -> /var/lib/minikube/binaries/v1.34.0/kubectl
	I0903 23:06:53.078128    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubeadm -> /var/lib/minikube/binaries/v1.34.0/kubeadm
	I0903 23:06:53.092009    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0903 23:06:53.092715    1228 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubectl
	I0903 23:06:53.093732    1228 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubeadm
	I0903 23:06:53.122370    1228 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.0/kubectl': No such file or directory
	I0903 23:06:53.122464    1228 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.0/kubeadm': No such file or directory
	I0903 23:06:53.122464    1228 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubelet -> /var/lib/minikube/binaries/v1.34.0/kubelet
	I0903 23:06:53.122464    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubectl --> /var/lib/minikube/binaries/v1.34.0/kubectl (60559544 bytes)
	I0903 23:06:53.122464    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubeadm --> /var/lib/minikube/binaries/v1.34.0/kubeadm (74027192 bytes)
	I0903 23:06:53.137325    1228 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubelet
	I0903 23:06:53.236721    1228 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.0/kubelet': No such file or directory
	I0903 23:06:53.236721    1228 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubelet --> /var/lib/minikube/binaries/v1.34.0/kubelet (59195684 bytes)
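The stat/scp pairs above follow a check-then-copy pattern: stat the remote path first, and transfer the binary only when that existence check fails. A small local sketch of the same idea, with a plain file write standing in for the scp step (the path is generated, the payload is illustrative; assumes GNU `stat -c`):

```shell
# Pick a path that does not exist yet, mimicking a fresh VM.
dst=$(mktemp -u)

# Existence check, as in the log: stat fails with a nonzero status
# when the target is missing, which triggers the "transfer".
if ! stat -c "%s %y" "$dst" >/dev/null 2>&1; then
  printf 'payload\n' > "$dst"   # stand-in for the scp of the binary
fi

stat -c %s "$dst"   # the 8-byte stand-in payload is now in place
```

A second run of the same snippet against the now-existing path skips the write entirely, which is the property minikube relies on to avoid re-copying multi-megabyte kubelet/kubeadm binaries.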
	I0903 23:06:54.426280    1228 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0903 23:06:54.448207    1228 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0903 23:06:54.486880    1228 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0903 23:06:54.537189    1228 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0903 23:06:54.604187    1228 ssh_runner.go:195] Run: grep 172.25.127.254	control-plane.minikube.internal$ /etc/hosts
	I0903 23:06:54.611011    1228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.127.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0903 23:06:54.652270    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:06:54.906121    1228 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0903 23:06:54.961940    1228 host.go:66] Checking if "ha-270000" exists ...
	I0903 23:06:54.962919    1228 start.go:317] joinCluster: &{Name:ha-270000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-270000 Namespace:default APIServerHAVIP:172.25.127.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.116.52 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.120.53 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.25.124.104 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:06:54.962919    1228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0903 23:06:54.962919    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-270000 ).state
	I0903 23:06:57.082253    1228 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:06:57.082253    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:57.082253    1228 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-270000 ).networkadapters[0]).ipaddresses[0]
	I0903 23:06:59.595292    1228 main.go:141] libmachine: [stdout =====>] : 172.25.116.52
	
	I0903 23:06:59.595292    1228 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:06:59.596933    1228 sshutil.go:53] new ssh client: &{IP:172.25.116.52 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-270000\id_rsa Username:docker}
	I0903 23:07:00.006485    1228 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0434954s)
	I0903 23:07:00.006605    1228 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.25.124.104 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0903 23:07:00.006741    1228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n21ykb.j8r73csmrpokwkyp --discovery-token-ca-cert-hash sha256:461028e7d31446a9db54ef88db35928fa51812dbcfd2f42c8a70c32665923137 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-270000-m03 --control-plane --apiserver-advertise-address=172.25.124.104 --apiserver-bind-port=8443"
	I0903 23:07:53.523560    1228 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n21ykb.j8r73csmrpokwkyp --discovery-token-ca-cert-hash sha256:461028e7d31446a9db54ef88db35928fa51812dbcfd2f42c8a70c32665923137 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-270000-m03 --control-plane --apiserver-advertise-address=172.25.124.104 --apiserver-bind-port=8443": (53.5160322s)
	I0903 23:07:53.523560    1228 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0903 23:07:54.245547    1228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-270000-m03 minikube.k8s.io/updated_at=2025_09_03T23_07_54_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=b3583632deefb20d71cab8d8ac0a8c3504aed1fb minikube.k8s.io/name=ha-270000 minikube.k8s.io/primary=false
	I0903 23:07:54.413010    1228 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-270000-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0903 23:07:54.563477    1228 start.go:319] duration metric: took 59.5997831s to joinCluster
	I0903 23:07:54.563477    1228 start.go:235] Will wait 6m0s for node &{Name:m03 IP:172.25.124.104 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0903 23:07:54.563477    1228 config.go:182] Loaded profile config "ha-270000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0903 23:07:54.567298    1228 out.go:179] * Verifying Kubernetes components...
	I0903 23:07:54.582502    1228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:07:54.887171    1228 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0903 23:07:54.922973    1228 kapi.go:59] client config for ha-270000: &rest.Config{Host:"https://172.25.127.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-270000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-270000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24e0580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0903 23:07:54.922973    1228 kubeadm.go:483] Overriding stale ClientConfig host https://172.25.127.254:8443 with https://172.25.116.52:8443
	I0903 23:07:54.924241    1228 node_ready.go:35] waiting up to 6m0s for node "ha-270000-m03" to be "Ready" ...
	W0903 23:07:56.967712    1228 node_ready.go:57] node "ha-270000-m03" has "Ready":"False" status (will retry)
	W0903 23:07:59.430841    1228 node_ready.go:57] node "ha-270000-m03" has "Ready":"False" status (will retry)
	W0903 23:08:01.431397    1228 node_ready.go:57] node "ha-270000-m03" has "Ready":"False" status (will retry)
	W0903 23:08:03.431596    1228 node_ready.go:57] node "ha-270000-m03" has "Ready":"False" status (will retry)
	W0903 23:08:05.433719    1228 node_ready.go:57] node "ha-270000-m03" has "Ready":"False" status (will retry)
	W0903 23:08:07.435053    1228 node_ready.go:57] node "ha-270000-m03" has "Ready":"False" status (will retry)
	W0903 23:08:09.930979    1228 node_ready.go:57] node "ha-270000-m03" has "Ready":"False" status (will retry)
	W0903 23:08:11.932344    1228 node_ready.go:57] node "ha-270000-m03" has "Ready":"False" status (will retry)
	W0903 23:08:14.439563    1228 node_ready.go:57] node "ha-270000-m03" has "Ready":"False" status (will retry)
	W0903 23:08:16.931256    1228 node_ready.go:57] node "ha-270000-m03" has "Ready":"False" status (will retry)
	W0903 23:08:19.431758    1228 node_ready.go:57] node "ha-270000-m03" has "Ready":"False" status (will retry)
	W0903 23:08:21.931336    1228 node_ready.go:57] node "ha-270000-m03" has "Ready":"False" status (will retry)
	W0903 23:08:23.931845    1228 node_ready.go:57] node "ha-270000-m03" has "Ready":"False" status (will retry)
	I0903 23:08:24.432339    1228 node_ready.go:49] node "ha-270000-m03" is "Ready"
	I0903 23:08:24.432339    1228 node_ready.go:38] duration metric: took 29.5076881s for node "ha-270000-m03" to be "Ready" ...
	I0903 23:08:24.432437    1228 api_server.go:52] waiting for apiserver process to appear ...
	I0903 23:08:24.444295    1228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:08:24.482653    1228 api_server.go:72] duration metric: took 29.9187608s to wait for apiserver process to appear ...
	I0903 23:08:24.482704    1228 api_server.go:88] waiting for apiserver healthz status ...
	I0903 23:08:24.482770    1228 api_server.go:253] Checking apiserver healthz at https://172.25.116.52:8443/healthz ...
	I0903 23:08:24.492894    1228 api_server.go:279] https://172.25.116.52:8443/healthz returned 200:
	ok
	I0903 23:08:24.496626    1228 api_server.go:141] control plane version: v1.34.0
	I0903 23:08:24.496626    1228 api_server.go:131] duration metric: took 13.9216ms to wait for apiserver health ...
	I0903 23:08:24.496626    1228 system_pods.go:43] waiting for kube-system pods to appear ...
	I0903 23:08:24.508610    1228 system_pods.go:59] 24 kube-system pods found
	I0903 23:08:24.508681    1228 system_pods.go:61] "coredns-66bc5c9577-58qw9" [e4c3bec4-9c47-404e-98ff-21e0aee82931] Running
	I0903 23:08:24.508681    1228 system_pods.go:61] "coredns-66bc5c9577-cnk8d" [20226b19-1d13-4057-88c1-709997f24868] Running
	I0903 23:08:24.508681    1228 system_pods.go:61] "etcd-ha-270000" [bedaa6e6-7109-475b-b96e-34178b2a83e2] Running
	I0903 23:08:24.508681    1228 system_pods.go:61] "etcd-ha-270000-m02" [d123ed06-ba3b-4745-a419-0b7720e9e903] Running
	I0903 23:08:24.508681    1228 system_pods.go:61] "etcd-ha-270000-m03" [5684b0cc-afb5-415c-9a8d-452523531995] Running
	I0903 23:08:24.508681    1228 system_pods.go:61] "kindnet-96trb" [32ea1443-99f0-4e56-99cb-d1ce43dbcb2f] Running
	I0903 23:08:24.508681    1228 system_pods.go:61] "kindnet-vsgwr" [aa24d517-8c6d-4625-bd97-6f7fe1f7f72e] Running
	I0903 23:08:24.508752    1228 system_pods.go:61] "kindnet-wqmlt" [230736de-aaf5-4c9c-9af9-6a4bcc572547] Running
	I0903 23:08:24.508752    1228 system_pods.go:61] "kube-apiserver-ha-270000" [8b258bec-c81d-404f-b217-dccd40799d89] Running
	I0903 23:08:24.508752    1228 system_pods.go:61] "kube-apiserver-ha-270000-m02" [16ba52a6-4dfc-487f-9bc9-65d94e1fffd8] Running
	I0903 23:08:24.508752    1228 system_pods.go:61] "kube-apiserver-ha-270000-m03" [30239ff2-f7a0-4a91-920c-058ee37aee79] Running
	I0903 23:08:24.508752    1228 system_pods.go:61] "kube-controller-manager-ha-270000" [a695c6ed-2e2f-41ea-a250-9b01b1ae90af] Running
	I0903 23:08:24.508752    1228 system_pods.go:61] "kube-controller-manager-ha-270000-m02" [f39fb141-4af3-4207-8f1c-1ce77b760861] Running
	I0903 23:08:24.508752    1228 system_pods.go:61] "kube-controller-manager-ha-270000-m03" [c18582aa-1ead-4403-a412-1cc46100151b] Running
	I0903 23:08:24.508752    1228 system_pods.go:61] "kube-proxy-cb8z2" [1b8a13fe-f029-42c2-9241-18cc0213dce2] Running
	I0903 23:08:24.508752    1228 system_pods.go:61] "kube-proxy-qkts6" [8e651463-997a-4431-a14c-29557282565f] Running
	I0903 23:08:24.508862    1228 system_pods.go:61] "kube-proxy-t96st" [f609fa93-da46-46a5-ba36-84c291da86a5] Running
	I0903 23:08:24.508914    1228 system_pods.go:61] "kube-scheduler-ha-270000" [a257c6a6-4337-49fd-ba96-c6248221f207] Running
	I0903 23:08:24.508914    1228 system_pods.go:61] "kube-scheduler-ha-270000-m02" [5c49ee66-b613-4b3c-9539-da558d1dd53a] Running
	I0903 23:08:24.508914    1228 system_pods.go:61] "kube-scheduler-ha-270000-m03" [061cecf5-9818-4f99-b6d2-603759814139] Running
	I0903 23:08:24.508914    1228 system_pods.go:61] "kube-vip-ha-270000" [4a489bea-b3e7-43bd-96e0-58c1480000a4] Running
	I0903 23:08:24.508914    1228 system_pods.go:61] "kube-vip-ha-270000-m02" [163cfde8-7488-49ac-b241-2509a7b01d1b] Running
	I0903 23:08:24.508914    1228 system_pods.go:61] "kube-vip-ha-270000-m03" [66b497a0-35c9-470b-b263-bb25c762b83e] Running
	I0903 23:08:24.508914    1228 system_pods.go:61] "storage-provisioner" [7643327e-078c-45c9-9a32-cdf3b7a72986] Running
	I0903 23:08:24.508988    1228 system_pods.go:74] duration metric: took 12.3617ms to wait for pod list to return data ...
	I0903 23:08:24.508988    1228 default_sa.go:34] waiting for default service account to be created ...
	I0903 23:08:24.515504    1228 default_sa.go:45] found service account: "default"
	I0903 23:08:24.515504    1228 default_sa.go:55] duration metric: took 6.5162ms for default service account to be created ...
	I0903 23:08:24.515504    1228 system_pods.go:116] waiting for k8s-apps to be running ...
	I0903 23:08:24.541213    1228 system_pods.go:86] 24 kube-system pods found
	I0903 23:08:24.541281    1228 system_pods.go:89] "coredns-66bc5c9577-58qw9" [e4c3bec4-9c47-404e-98ff-21e0aee82931] Running
	I0903 23:08:24.541281    1228 system_pods.go:89] "coredns-66bc5c9577-cnk8d" [20226b19-1d13-4057-88c1-709997f24868] Running
	I0903 23:08:24.541281    1228 system_pods.go:89] "etcd-ha-270000" [bedaa6e6-7109-475b-b96e-34178b2a83e2] Running
	I0903 23:08:24.541366    1228 system_pods.go:89] "etcd-ha-270000-m02" [d123ed06-ba3b-4745-a419-0b7720e9e903] Running
	I0903 23:08:24.541366    1228 system_pods.go:89] "etcd-ha-270000-m03" [5684b0cc-afb5-415c-9a8d-452523531995] Running
	I0903 23:08:24.541366    1228 system_pods.go:89] "kindnet-96trb" [32ea1443-99f0-4e56-99cb-d1ce43dbcb2f] Running
	I0903 23:08:24.541366    1228 system_pods.go:89] "kindnet-vsgwr" [aa24d517-8c6d-4625-bd97-6f7fe1f7f72e] Running
	I0903 23:08:24.541366    1228 system_pods.go:89] "kindnet-wqmlt" [230736de-aaf5-4c9c-9af9-6a4bcc572547] Running
	I0903 23:08:24.541366    1228 system_pods.go:89] "kube-apiserver-ha-270000" [8b258bec-c81d-404f-b217-dccd40799d89] Running
	I0903 23:08:24.541366    1228 system_pods.go:89] "kube-apiserver-ha-270000-m02" [16ba52a6-4dfc-487f-9bc9-65d94e1fffd8] Running
	I0903 23:08:24.541366    1228 system_pods.go:89] "kube-apiserver-ha-270000-m03" [30239ff2-f7a0-4a91-920c-058ee37aee79] Running
	I0903 23:08:24.541366    1228 system_pods.go:89] "kube-controller-manager-ha-270000" [a695c6ed-2e2f-41ea-a250-9b01b1ae90af] Running
	I0903 23:08:24.541450    1228 system_pods.go:89] "kube-controller-manager-ha-270000-m02" [f39fb141-4af3-4207-8f1c-1ce77b760861] Running
	I0903 23:08:24.541481    1228 system_pods.go:89] "kube-controller-manager-ha-270000-m03" [c18582aa-1ead-4403-a412-1cc46100151b] Running
	I0903 23:08:24.541481    1228 system_pods.go:89] "kube-proxy-cb8z2" [1b8a13fe-f029-42c2-9241-18cc0213dce2] Running
	I0903 23:08:24.541507    1228 system_pods.go:89] "kube-proxy-qkts6" [8e651463-997a-4431-a14c-29557282565f] Running
	I0903 23:08:24.541507    1228 system_pods.go:89] "kube-proxy-t96st" [f609fa93-da46-46a5-ba36-84c291da86a5] Running
	I0903 23:08:24.541507    1228 system_pods.go:89] "kube-scheduler-ha-270000" [a257c6a6-4337-49fd-ba96-c6248221f207] Running
	I0903 23:08:24.541507    1228 system_pods.go:89] "kube-scheduler-ha-270000-m02" [5c49ee66-b613-4b3c-9539-da558d1dd53a] Running
	I0903 23:08:24.541507    1228 system_pods.go:89] "kube-scheduler-ha-270000-m03" [061cecf5-9818-4f99-b6d2-603759814139] Running
	I0903 23:08:24.541507    1228 system_pods.go:89] "kube-vip-ha-270000" [4a489bea-b3e7-43bd-96e0-58c1480000a4] Running
	I0903 23:08:24.541507    1228 system_pods.go:89] "kube-vip-ha-270000-m02" [163cfde8-7488-49ac-b241-2509a7b01d1b] Running
	I0903 23:08:24.541507    1228 system_pods.go:89] "kube-vip-ha-270000-m03" [66b497a0-35c9-470b-b263-bb25c762b83e] Running
	I0903 23:08:24.541507    1228 system_pods.go:89] "storage-provisioner" [7643327e-078c-45c9-9a32-cdf3b7a72986] Running
	I0903 23:08:24.541507    1228 system_pods.go:126] duration metric: took 26.003ms to wait for k8s-apps to be running ...
	I0903 23:08:24.541507    1228 system_svc.go:44] waiting for kubelet service to be running ....
	I0903 23:08:24.552989    1228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0903 23:08:24.587108    1228 system_svc.go:56] duration metric: took 45.5997ms WaitForService to wait for kubelet
	I0903 23:08:24.587108    1228 kubeadm.go:578] duration metric: took 30.0232135s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0903 23:08:24.587297    1228 node_conditions.go:102] verifying NodePressure condition ...
	I0903 23:08:24.595438    1228 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0903 23:08:24.595438    1228 node_conditions.go:123] node cpu capacity is 2
	I0903 23:08:24.595438    1228 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0903 23:08:24.595438    1228 node_conditions.go:123] node cpu capacity is 2
	I0903 23:08:24.595438    1228 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0903 23:08:24.595438    1228 node_conditions.go:123] node cpu capacity is 2
	I0903 23:08:24.595438    1228 node_conditions.go:105] duration metric: took 8.1409ms to run NodePressure ...
	I0903 23:08:24.595438    1228 start.go:241] waiting for startup goroutines ...
	I0903 23:08:24.596015    1228 start.go:255] writing updated cluster config ...
	I0903 23:08:24.609564    1228 ssh_runner.go:195] Run: rm -f paused
	I0903 23:08:24.617784    1228 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0903 23:08:24.619208    1228 kapi.go:59] client config for ha-270000: &rest.Config{Host:"https://172.25.127.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-270000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-270000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24e0580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0903 23:08:24.641304    1228 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-58qw9" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:24.652199    1228 pod_ready.go:94] pod "coredns-66bc5c9577-58qw9" is "Ready"
	I0903 23:08:24.652199    1228 pod_ready.go:86] duration metric: took 10.8957ms for pod "coredns-66bc5c9577-58qw9" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:24.652199    1228 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-cnk8d" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:24.662823    1228 pod_ready.go:94] pod "coredns-66bc5c9577-cnk8d" is "Ready"
	I0903 23:08:24.662892    1228 pod_ready.go:86] duration metric: took 10.6233ms for pod "coredns-66bc5c9577-cnk8d" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:24.670153    1228 pod_ready.go:83] waiting for pod "etcd-ha-270000" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:24.680168    1228 pod_ready.go:94] pod "etcd-ha-270000" is "Ready"
	I0903 23:08:24.680168    1228 pod_ready.go:86] duration metric: took 10.015ms for pod "etcd-ha-270000" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:24.680168    1228 pod_ready.go:83] waiting for pod "etcd-ha-270000-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:24.688832    1228 pod_ready.go:94] pod "etcd-ha-270000-m02" is "Ready"
	I0903 23:08:24.688832    1228 pod_ready.go:86] duration metric: took 8.6637ms for pod "etcd-ha-270000-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:24.688832    1228 pod_ready.go:83] waiting for pod "etcd-ha-270000-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:24.821448    1228 request.go:683] "Waited before sending request" delay="132.6148ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-270000-m03"
	I0903 23:08:25.021242    1228 request.go:683] "Waited before sending request" delay="193.6485ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/nodes/ha-270000-m03"
	I0903 23:08:25.031181    1228 pod_ready.go:94] pod "etcd-ha-270000-m03" is "Ready"
	I0903 23:08:25.031240    1228 pod_ready.go:86] duration metric: took 342.4039ms for pod "etcd-ha-270000-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:25.221160    1228 request.go:683] "Waited before sending request" delay="189.8598ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0903 23:08:25.229439    1228 pod_ready.go:83] waiting for pod "kube-apiserver-ha-270000" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:25.420782    1228 request.go:683] "Waited before sending request" delay="191.2094ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-270000"
	I0903 23:08:25.620766    1228 request.go:683] "Waited before sending request" delay="193.9519ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/nodes/ha-270000"
	I0903 23:08:25.627063    1228 pod_ready.go:94] pod "kube-apiserver-ha-270000" is "Ready"
	I0903 23:08:25.627063    1228 pod_ready.go:86] duration metric: took 397.5425ms for pod "kube-apiserver-ha-270000" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:25.627676    1228 pod_ready.go:83] waiting for pod "kube-apiserver-ha-270000-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:25.820766    1228 request.go:683] "Waited before sending request" delay="192.6908ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-270000-m02"
	I0903 23:08:26.021536    1228 request.go:683] "Waited before sending request" delay="189.2531ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/nodes/ha-270000-m02"
	I0903 23:08:26.027132    1228 pod_ready.go:94] pod "kube-apiserver-ha-270000-m02" is "Ready"
	I0903 23:08:26.027132    1228 pod_ready.go:86] duration metric: took 399.1179ms for pod "kube-apiserver-ha-270000-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:26.027132    1228 pod_ready.go:83] waiting for pod "kube-apiserver-ha-270000-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:26.221246    1228 request.go:683] "Waited before sending request" delay="194.1117ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-270000-m03"
	I0903 23:08:26.420827    1228 request.go:683] "Waited before sending request" delay="192.3747ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/nodes/ha-270000-m03"
	I0903 23:08:26.427893    1228 pod_ready.go:94] pod "kube-apiserver-ha-270000-m03" is "Ready"
	I0903 23:08:26.427946    1228 pod_ready.go:86] duration metric: took 400.8091ms for pod "kube-apiserver-ha-270000-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:26.621410    1228 request.go:683] "Waited before sending request" delay="193.3496ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I0903 23:08:26.632244    1228 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-270000" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:26.820890    1228 request.go:683] "Waited before sending request" delay="188.5357ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-270000"
	I0903 23:08:27.021312    1228 request.go:683] "Waited before sending request" delay="193.81ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/nodes/ha-270000"
	I0903 23:08:27.027646    1228 pod_ready.go:94] pod "kube-controller-manager-ha-270000" is "Ready"
	I0903 23:08:27.027646    1228 pod_ready.go:86] duration metric: took 395.3431ms for pod "kube-controller-manager-ha-270000" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:27.027646    1228 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-270000-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:27.221499    1228 request.go:683] "Waited before sending request" delay="193.8499ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-270000-m02"
	I0903 23:08:27.421447    1228 request.go:683] "Waited before sending request" delay="193.692ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/nodes/ha-270000-m02"
	I0903 23:08:27.426969    1228 pod_ready.go:94] pod "kube-controller-manager-ha-270000-m02" is "Ready"
	I0903 23:08:27.427059    1228 pod_ready.go:86] duration metric: took 399.4068ms for pod "kube-controller-manager-ha-270000-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:27.427059    1228 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-270000-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:27.621139    1228 request.go:683] "Waited before sending request" delay="194.0775ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-270000-m03"
	I0903 23:08:27.821591    1228 request.go:683] "Waited before sending request" delay="192.9871ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/nodes/ha-270000-m03"
	I0903 23:08:27.828155    1228 pod_ready.go:94] pod "kube-controller-manager-ha-270000-m03" is "Ready"
	I0903 23:08:27.828155    1228 pod_ready.go:86] duration metric: took 401.0908ms for pod "kube-controller-manager-ha-270000-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:28.020724    1228 request.go:683] "Waited before sending request" delay="191.839ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0903 23:08:28.029278    1228 pod_ready.go:83] waiting for pod "kube-proxy-cb8z2" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:28.221184    1228 request.go:683] "Waited before sending request" delay="191.9039ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cb8z2"
	I0903 23:08:28.420703    1228 request.go:683] "Waited before sending request" delay="193.5347ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/nodes/ha-270000-m03"
	I0903 23:08:28.426633    1228 pod_ready.go:94] pod "kube-proxy-cb8z2" is "Ready"
	I0903 23:08:28.427169    1228 pod_ready.go:86] duration metric: took 397.8862ms for pod "kube-proxy-cb8z2" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:28.427169    1228 pod_ready.go:83] waiting for pod "kube-proxy-qkts6" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:28.620674    1228 request.go:683] "Waited before sending request" delay="193.3011ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qkts6"
	I0903 23:08:28.821157    1228 request.go:683] "Waited before sending request" delay="194.5718ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/nodes/ha-270000-m02"
	I0903 23:08:28.828052    1228 pod_ready.go:94] pod "kube-proxy-qkts6" is "Ready"
	I0903 23:08:28.828052    1228 pod_ready.go:86] duration metric: took 400.8773ms for pod "kube-proxy-qkts6" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:28.828052    1228 pod_ready.go:83] waiting for pod "kube-proxy-t96st" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:29.021450    1228 request.go:683] "Waited before sending request" delay="193.1719ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t96st"
	I0903 23:08:29.222419    1228 request.go:683] "Waited before sending request" delay="193.3362ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/nodes/ha-270000"
	I0903 23:08:29.227960    1228 pod_ready.go:94] pod "kube-proxy-t96st" is "Ready"
	I0903 23:08:29.227960    1228 pod_ready.go:86] duration metric: took 399.9026ms for pod "kube-proxy-t96st" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:29.421259    1228 request.go:683] "Waited before sending request" delay="193.1318ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I0903 23:08:29.430996    1228 pod_ready.go:83] waiting for pod "kube-scheduler-ha-270000" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:29.621651    1228 request.go:683] "Waited before sending request" delay="190.5024ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-270000"
	I0903 23:08:29.821140    1228 request.go:683] "Waited before sending request" delay="194.3559ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/nodes/ha-270000"
	I0903 23:08:29.827173    1228 pod_ready.go:94] pod "kube-scheduler-ha-270000" is "Ready"
	I0903 23:08:29.827225    1228 pod_ready.go:86] duration metric: took 396.1718ms for pod "kube-scheduler-ha-270000" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:29.827225    1228 pod_ready.go:83] waiting for pod "kube-scheduler-ha-270000-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:30.021745    1228 request.go:683] "Waited before sending request" delay="194.5167ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-270000-m02"
	I0903 23:08:30.220851    1228 request.go:683] "Waited before sending request" delay="191.5353ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/nodes/ha-270000-m02"
	I0903 23:08:30.226963    1228 pod_ready.go:94] pod "kube-scheduler-ha-270000-m02" is "Ready"
	I0903 23:08:30.226963    1228 pod_ready.go:86] duration metric: took 399.7316ms for pod "kube-scheduler-ha-270000-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:30.226963    1228 pod_ready.go:83] waiting for pod "kube-scheduler-ha-270000-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:30.420761    1228 request.go:683] "Waited before sending request" delay="193.6898ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-270000-m03"
	I0903 23:08:30.621262    1228 request.go:683] "Waited before sending request" delay="194.3443ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.127.254:8443/api/v1/nodes/ha-270000-m03"
	I0903 23:08:30.631859    1228 pod_ready.go:94] pod "kube-scheduler-ha-270000-m03" is "Ready"
	I0903 23:08:30.631951    1228 pod_ready.go:86] duration metric: took 404.8762ms for pod "kube-scheduler-ha-270000-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:30.631951    1228 pod_ready.go:40] duration metric: took 6.0140833s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0903 23:08:30.774051    1228 start.go:617] kubectl: 1.34.0, cluster: 1.34.0 (minor skew: 0)
	I0903 23:08:30.778556    1228 out.go:179] * Done! kubectl is now configured to use "ha-270000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 03 22:59:23 ha-270000 dockerd[2057]: time="2025-09-03T22:59:23.340853786Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint_count b815c8341521d784335e2ba21604b2414c9e730e154cf870398d1b8c474f33aa], retrying...."
	Sep 03 22:59:23 ha-270000 dockerd[2057]: time="2025-09-03T22:59:23.438738576Z" level=info msg="Loading containers: done."
	Sep 03 22:59:23 ha-270000 dockerd[2057]: time="2025-09-03T22:59:23.460709775Z" level=info msg="Docker daemon" commit=e77ff99 containerd-snapshotter=false storage-driver=overlay2 version=28.3.2
	Sep 03 22:59:23 ha-270000 dockerd[2057]: time="2025-09-03T22:59:23.460777676Z" level=info msg="Initializing buildkit"
	Sep 03 22:59:23 ha-270000 dockerd[2057]: time="2025-09-03T22:59:23.495790694Z" level=info msg="Completed buildkit initialization"
	Sep 03 22:59:23 ha-270000 dockerd[2057]: time="2025-09-03T22:59:23.511119834Z" level=info msg="Daemon has completed initialization"
	Sep 03 22:59:23 ha-270000 dockerd[2057]: time="2025-09-03T22:59:23.511168434Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 03 22:59:23 ha-270000 dockerd[2057]: time="2025-09-03T22:59:23.511361936Z" level=info msg="API listen on /run/docker.sock"
	Sep 03 22:59:23 ha-270000 dockerd[2057]: time="2025-09-03T22:59:23.511425737Z" level=info msg="API listen on [::]:2376"
	Sep 03 22:59:23 ha-270000 systemd[1]: Started Docker Application Container Engine.
	Sep 03 22:59:34 ha-270000 cri-dockerd[1921]: time="2025-09-03T22:59:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/90329fcf36cd0f912716cea1751c86422190bed362ff1c040970598366a259c2/resolv.conf as [nameserver 172.25.112.1]"
	Sep 03 22:59:34 ha-270000 cri-dockerd[1921]: time="2025-09-03T22:59:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9a3da56bb72c938aa1f38c595aee13d2464f856c5e46cdf558aec6d1a862db23/resolv.conf as [nameserver 172.25.112.1]"
	Sep 03 22:59:34 ha-270000 cri-dockerd[1921]: time="2025-09-03T22:59:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6bf910dadd391c2be6ded43b28e91e0547975fb132fdd33e1d7c9b17b2d84a3b/resolv.conf as [nameserver 172.25.112.1]"
	Sep 03 22:59:34 ha-270000 cri-dockerd[1921]: time="2025-09-03T22:59:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a4dd9e65d7d273637dc3367e6beeea47b9e1c094a4cc81fff90d528c28feba04/resolv.conf as [nameserver 172.25.112.1]"
	Sep 03 22:59:34 ha-270000 cri-dockerd[1921]: time="2025-09-03T22:59:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0b16264b4dd567bea1f101a9a6fdd98d72e0fe7e4e47a9de8397547ea6cc3912/resolv.conf as [nameserver 172.25.112.1]"
	Sep 03 22:59:37 ha-270000 cri-dockerd[1921]: time="2025-09-03T22:59:37Z" level=info msg="Stop pulling image ghcr.io/kube-vip/kube-vip:v1.0.0: Status: Downloaded newer image for ghcr.io/kube-vip/kube-vip:v1.0.0"
	Sep 03 22:59:48 ha-270000 cri-dockerd[1921]: time="2025-09-03T22:59:48Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 03 22:59:50 ha-270000 cri-dockerd[1921]: time="2025-09-03T22:59:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/513be39f65dea0bdfdf72a9db2617cb17253abdd890e152086c2e07560f9850b/resolv.conf as [nameserver 172.25.112.1]"
	Sep 03 22:59:50 ha-270000 cri-dockerd[1921]: time="2025-09-03T22:59:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/14155ffe05146e4c150dfdd56e7ccbd470fdd08c24940763f0ce633cc7d9ca72/resolv.conf as [nameserver 172.25.112.1]"
	Sep 03 22:59:57 ha-270000 cri-dockerd[1921]: time="2025-09-03T22:59:57Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20250512-df8de77b: Status: Downloaded newer image for kindest/kindnetd:v20250512-df8de77b"
	Sep 03 23:00:12 ha-270000 cri-dockerd[1921]: time="2025-09-03T23:00:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c1a846b8a99e7701086644b9e4b501865d72ddc25ed73eb3c13ec9c4c8f0a426/resolv.conf as [nameserver 172.25.112.1]"
	Sep 03 23:00:12 ha-270000 cri-dockerd[1921]: time="2025-09-03T23:00:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b5ab48725316b35b7cbfbe34ed1b7ef8ff490e2c9aab4bc6046ac062d6cd592c/resolv.conf as [nameserver 172.25.112.1]"
	Sep 03 23:00:12 ha-270000 cri-dockerd[1921]: time="2025-09-03T23:00:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5667c542f41f834bccd4227fef98bf0c3102aa8be800e12cbca9ed319d69cd70/resolv.conf as [nameserver 172.25.112.1]"
	Sep 03 23:09:09 ha-270000 cri-dockerd[1921]: time="2025-09-03T23:09:09Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f54f411a35779176c9319b737fbe697ae2872af4162be6251aa352a81a0471d0/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 03 23:09:11 ha-270000 cri-dockerd[1921]: time="2025-09-03T23:09:11Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2b2d73adb2f15       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   17 minutes ago      Running             busybox                   0                   f54f411a35779       busybox-7b57f96db7-lxhhz
	4ea445ef36026       52546a367cc9e                                                                                         26 minutes ago      Running             coredns                   0                   5667c542f41f8       coredns-66bc5c9577-cnk8d
	39d49eaefc29e       52546a367cc9e                                                                                         26 minutes ago      Running             coredns                   0                   c1a846b8a99e7       coredns-66bc5c9577-58qw9
	afc6e3d43fb6c       6e38f40d628db                                                                                         26 minutes ago      Running             storage-provisioner       0                   b5ab48725316b       storage-provisioner
	1aed5b11fdcd8       kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a              27 minutes ago      Running             kindnet-cni               0                   14155ffe05146       kindnet-96trb
	faad83036df83       df0860106674d                                                                                         27 minutes ago      Running             kube-proxy                0                   513be39f65dea       kube-proxy-t96st
	9b02f8b78eee0       ghcr.io/kube-vip/kube-vip@sha256:4f256554a83a6d824ea9c5307450a2c3fd132e09c52b339326f94fefaf67155c     27 minutes ago      Running             kube-vip                  0                   90329fcf36cd0       kube-vip-ha-270000
	5227167cf7b2c       46169d968e920                                                                                         27 minutes ago      Running             kube-scheduler            0                   0b16264b4dd56       kube-scheduler-ha-270000
	9f44f2bbeacca       5f1f5298c888d                                                                                         27 minutes ago      Running             etcd                      0                   a4dd9e65d7d27       etcd-ha-270000
	7f593816c5b60       a0af72f2ec6d6                                                                                         27 minutes ago      Running             kube-controller-manager   0                   6bf910dadd391       kube-controller-manager-ha-270000
	33fa1cad16779       90550c43ad2bc                                                                                         27 minutes ago      Running             kube-apiserver            0                   9a3da56bb72c9       kube-apiserver-ha-270000
	
	
	==> coredns [39d49eaefc29] <==
	[INFO] 10.244.2.2:54702 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000196002s
	[INFO] 10.244.2.2:35992 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000194403s
	[INFO] 10.244.1.2:50567 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154002s
	[INFO] 10.244.1.2:54999 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000085201s
	[INFO] 10.244.1.2:53722 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174003s
	[INFO] 10.244.1.2:36245 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112502s
	[INFO] 10.244.0.4:56323 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000659108s
	[INFO] 10.244.0.4:50146 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.016604612s
	[INFO] 10.244.0.4:43817 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000303604s
	[INFO] 10.244.0.4:46846 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000126002s
	[INFO] 10.244.0.4:44316 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000307304s
	[INFO] 10.244.0.4:52546 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000210102s
	[INFO] 10.244.0.4:36032 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000147302s
	[INFO] 10.244.2.2:34527 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000252504s
	[INFO] 10.244.1.2:47369 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169502s
	[INFO] 10.244.1.2:60919 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000339705s
	[INFO] 10.244.1.2:52619 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000166602s
	[INFO] 10.244.1.2:57454 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064101s
	[INFO] 10.244.0.4:34556 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000241403s
	[INFO] 10.244.2.2:33501 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000297104s
	[INFO] 10.244.2.2:49833 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000100801s
	[INFO] 10.244.2.2:45636 - 5 "PTR IN 1.112.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000209903s
	[INFO] 10.244.1.2:53110 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000322804s
	[INFO] 10.244.1.2:40341 - 5 "PTR IN 1.112.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000184102s
	[INFO] 10.244.0.4:47011 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000080701s
	
	
	==> coredns [4ea445ef3602] <==
	[INFO] 10.244.2.2:57620 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd 124 0.181580921s
	[INFO] 10.244.1.2:33948 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.003741948s
	[INFO] 10.244.1.2:33516 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 89 0.001293416s
	[INFO] 10.244.0.4:39698 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd 126 0.000207603s
	[INFO] 10.244.0.4:53955 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 124 0.000087401s
	[INFO] 10.244.2.2:53515 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.032336012s
	[INFO] 10.244.2.2:49443 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205803s
	[INFO] 10.244.2.2:43376 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000159102s
	[INFO] 10.244.1.2:42079 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000429105s
	[INFO] 10.244.1.2:33994 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000074101s
	[INFO] 10.244.1.2:53427 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000296504s
	[INFO] 10.244.1.2:58071 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000289403s
	[INFO] 10.244.0.4:41062 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000192502s
	[INFO] 10.244.2.2:60168 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000249103s
	[INFO] 10.244.2.2:55216 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000240103s
	[INFO] 10.244.2.2:43311 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080101s
	[INFO] 10.244.0.4:39601 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000326604s
	[INFO] 10.244.0.4:50681 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000139702s
	[INFO] 10.244.0.4:41448 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121302s
	[INFO] 10.244.2.2:44725 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000117502s
	[INFO] 10.244.1.2:45944 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118701s
	[INFO] 10.244.1.2:44094 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000950212s
	[INFO] 10.244.0.4:46361 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000252203s
	[INFO] 10.244.0.4:48916 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000094101s
	[INFO] 10.244.0.4:45915 - 5 "PTR IN 1.112.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000122901s
	
	
	==> describe nodes <==
	Name:               ha-270000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-270000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b3583632deefb20d71cab8d8ac0a8c3504aed1fb
	                    minikube.k8s.io/name=ha-270000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_03T22_59_45_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Sep 2025 22:59:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-270000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Sep 2025 23:26:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Sep 2025 23:26:05 +0000   Wed, 03 Sep 2025 22:59:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Sep 2025 23:26:05 +0000   Wed, 03 Sep 2025 22:59:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Sep 2025 23:26:05 +0000   Wed, 03 Sep 2025 22:59:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Sep 2025 23:26:05 +0000   Wed, 03 Sep 2025 23:00:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.116.52
	  Hostname:    ha-270000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac512743579d4a1595cd8eeb12593efb
	  System UUID:                19a5aee7-0b11-eb4e-892b-911233248f7e
	  Boot ID:                    5dec2aa3-6ec6-413a-8333-c7300633f796
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.3.2
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-lxhhz             0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-66bc5c9577-58qw9             100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     27m
	  kube-system                 coredns-66bc5c9577-cnk8d             100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     27m
	  kube-system                 etcd-ha-270000                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         27m
	  kube-system                 kindnet-96trb                        100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      27m
	  kube-system                 kube-apiserver-ha-270000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-270000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-t96st                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-ha-270000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-270000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (9%)  390Mi (13%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27m                kube-proxy       
	  Normal  NodeHasSufficientPID     27m (x7 over 27m)  kubelet          Node ha-270000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 27m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m (x8 over 27m)  kubelet          Node ha-270000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x8 over 27m)  kubelet          Node ha-270000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 27m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27m                kubelet          Node ha-270000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m                kubelet          Node ha-270000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m                kubelet          Node ha-270000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27m                node-controller  Node ha-270000 event: Registered Node ha-270000 in Controller
	  Normal  NodeReady                26m                kubelet          Node ha-270000 status is now: NodeReady
	  Normal  RegisteredNode           23m                node-controller  Node ha-270000 event: Registered Node ha-270000 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-270000 event: Registered Node ha-270000 in Controller
	
	
	Name:               ha-270000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-270000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b3583632deefb20d71cab8d8ac0a8c3504aed1fb
	                    minikube.k8s.io/name=ha-270000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_03T23_03_48_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Sep 2025 23:03:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-270000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Sep 2025 23:26:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Sep 2025 23:25:24 +0000   Wed, 03 Sep 2025 23:03:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Sep 2025 23:25:24 +0000   Wed, 03 Sep 2025 23:03:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Sep 2025 23:25:24 +0000   Wed, 03 Sep 2025 23:03:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Sep 2025 23:25:24 +0000   Wed, 03 Sep 2025 23:04:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.120.53
	  Hostname:    ha-270000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	System Info:
	  Machine ID:                 024b94a46799424c84c7081b9a292387
	  System UUID:                31707a6e-1c2d-984a-a6d3-0674b15d2706
	  Boot ID:                    31d26a98-3b58-44f3-a168-e2d96656d476
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.3.2
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-c6z29                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-ha-270000-m02                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         23m
	  kube-system                 kindnet-vsgwr                            100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      23m
	  kube-system                 kube-apiserver-ha-270000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-270000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-qkts6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-ha-270000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-270000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (5%)  50Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        23m   kube-proxy       
	  Normal  RegisteredNode  23m   node-controller  Node ha-270000-m02 event: Registered Node ha-270000-m02 in Controller
	  Normal  RegisteredNode  23m   node-controller  Node ha-270000-m02 event: Registered Node ha-270000-m02 in Controller
	  Normal  RegisteredNode  19m   node-controller  Node ha-270000-m02 event: Registered Node ha-270000-m02 in Controller
	
	
	Name:               ha-270000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-270000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b3583632deefb20d71cab8d8ac0a8c3504aed1fb
	                    minikube.k8s.io/name=ha-270000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_03T23_07_54_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Sep 2025 23:07:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-270000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Sep 2025 23:26:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Sep 2025 23:26:14 +0000   Wed, 03 Sep 2025 23:07:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Sep 2025 23:26:14 +0000   Wed, 03 Sep 2025 23:07:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Sep 2025 23:26:14 +0000   Wed, 03 Sep 2025 23:07:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Sep 2025 23:26:14 +0000   Wed, 03 Sep 2025 23:08:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.124.104
	  Hostname:    ha-270000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	System Info:
	  Machine ID:                 0c41329947b84b6ba0c0c88ad46c0ca9
	  System UUID:                a8bd4d02-c4f0-2149-98e7-f240fc6aa90c
	  Boot ID:                    e76f92a8-1f48-48bb-8d14-b86184e2d0d1
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.3.2
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-5cfq2                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-ha-270000-m03                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         19m
	  kube-system                 kindnet-wqmlt                            100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      19m
	  kube-system                 kube-apiserver-ha-270000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-270000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-cb8z2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-270000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-270000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (5%)  50Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        18m   kube-proxy       
	  Normal  RegisteredNode  19m   node-controller  Node ha-270000-m03 event: Registered Node ha-270000-m03 in Controller
	  Normal  RegisteredNode  19m   node-controller  Node ha-270000-m03 event: Registered Node ha-270000-m03 in Controller
	  Normal  RegisteredNode  19m   node-controller  Node ha-270000-m03 event: Registered Node ha-270000-m03 in Controller
	
	
	Name:               ha-270000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-270000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b3583632deefb20d71cab8d8ac0a8c3504aed1fb
	                    minikube.k8s.io/name=ha-270000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_03T23_13_25_0700
	                    minikube.k8s.io/version=v1.36.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Sep 2025 23:13:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-270000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Sep 2025 23:26:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Sep 2025 23:22:17 +0000   Wed, 03 Sep 2025 23:13:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Sep 2025 23:22:17 +0000   Wed, 03 Sep 2025 23:13:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Sep 2025 23:22:17 +0000   Wed, 03 Sep 2025 23:13:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Sep 2025 23:22:17 +0000   Wed, 03 Sep 2025 23:14:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.114.77
	  Hostname:    ha-270000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	System Info:
	  Machine ID:                 6fa9e521079940f0acda1d27836a82b9
	  System UUID:                a1f41e22-2cbe-8049-a795-c5056b5e7552
	  Boot ID:                    d550a360-c79a-4f98-b9ff-5c425012f7e5
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.3.2
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-7mwnc       100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      13m
	  kube-system                 kube-proxy-7n7dn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (1%)  50Mi (1%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  RegisteredNode           13m                node-controller  Node ha-270000-m04 event: Registered Node ha-270000-m04 in Controller
	  Normal  NodeHasSufficientMemory  13m (x3 over 13m)  kubelet          Node ha-270000-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x3 over 13m)  kubelet          Node ha-270000-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x3 over 13m)  kubelet          Node ha-270000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node ha-270000-m04 event: Registered Node ha-270000-m04 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-270000-m04 event: Registered Node ha-270000-m04 in Controller
	  Normal  NodeReady                12m                kubelet          Node ha-270000-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000000] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +0.002271] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.000009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.002272] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	              * this clock source is slow. Consider trying other clock sources
	[  +0.665501] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +0.000056] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002869] (rpcbind)[114]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.622383] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep 3 22:59] kauditd_printk_skb: 96 callbacks suppressed
	[  +0.187594] kauditd_printk_skb: 396 callbacks suppressed
	[  +0.185708] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.149384] kauditd_printk_skb: 193 callbacks suppressed
	[  +6.035020] kauditd_printk_skb: 174 callbacks suppressed
	[  +0.209097] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.886303] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.389456] kauditd_printk_skb: 107 callbacks suppressed
	[Sep 3 23:00] kauditd_printk_skb: 17 callbacks suppressed
	[Sep 3 23:03] kauditd_printk_skb: 92 callbacks suppressed
	[Sep 3 23:09] hrtimer: interrupt took 1369018 ns
	[Sep 3 23:26] kauditd_printk_skb: 20 callbacks suppressed
	
	
	==> etcd [9f44f2bbeacc] <==
	{"level":"warn","ts":"2025-09-03T23:27:01.032829Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"234c625b0f0adc4e","from":"234c625b0f0adc4e","remote-peer-id":"5ec5d9f85793fb82","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-03T23:27:01.058996Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"234c625b0f0adc4e","from":"234c625b0f0adc4e","remote-peer-id":"5ec5d9f85793fb82","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-03T23:27:01.065136Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"234c625b0f0adc4e","from":"234c625b0f0adc4e","remote-peer-id":"5ec5d9f85793fb82","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-03T23:27:01.075495Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"234c625b0f0adc4e","from":"234c625b0f0adc4e","remote-peer-id":"5ec5d9f85793fb82","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-03T23:27:01.085528Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"234c625b0f0adc4e","from":"234c625b0f0adc4e","remote-peer-id":"5ec5d9f85793fb82","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-03T23:27:01.092948Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"234c625b0f0adc4e","from":"234c625b0f0adc4e","remote-peer-id":"5ec5d9f85793fb82","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-03T23:27:01.105193Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"234c625b0f0adc4e","from":"234c625b0f0adc4e","remote-peer-id":"5ec5d9f85793fb82","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-03T23:27:01.117459Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"234c625b0f0adc4e","from":"234c625b0f0adc4e","remote-peer-id":"5ec5d9f85793fb82","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-03T23:27:01.124353Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"234c625b0f0adc4e","from":"234c625b0f0adc4e","remote-peer-id":"5ec5d9f85793fb82","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-03T23:27:01.128404Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"234c625b0f0adc4e","from":"234c625b0f0adc4e","remote-peer-id":"5ec5d9f85793fb82","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-03T23:27:01.135931Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"234c625b0f0adc4e","from":"234c625b0f0adc4e","remote-peer-id":"5ec5d9f85793fb82","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-03T23:27:01.146954Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"234c625b0f0adc4e","from":"234c625b0f0adc4e","remote-peer-id":"5ec5d9f85793fb82","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-03T23:27:01.156994Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"234c625b0f0adc4e","from":"234c625b0f0adc4e","remote-peer-id":"5ec5d9f85793fb82","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-03T23:27:01.159011Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"234c625b0f0adc4e","from":"234c625b0f0adc4e","remote-peer-id":"5ec5d9f85793fb82","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-03T23:27:01.163060Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"234c625b0f0adc4e","from":"234c625b0f0adc4e","remote-peer-id":"5ec5d9f85793fb82","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-03T23:27:01.168002Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"234c625b0f0adc4e","from":"234c625b0f0adc4e","remote-peer-id":"5ec5d9f85793fb82","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-03T23:27:01.174368Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"234c625b0f0adc4e","from":"234c625b0f0adc4e","remote-peer-id":"5ec5d9f85793fb82","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-03T23:27:01.184172Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"234c625b0f0adc4e","from":"234c625b0f0adc4e","remote-peer-id":"5ec5d9f85793fb82","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-03T23:27:01.193680Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"234c625b0f0adc4e","from":"234c625b0f0adc4e","remote-peer-id":"5ec5d9f85793fb82","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-03T23:27:01.197766Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"234c625b0f0adc4e","from":"234c625b0f0adc4e","remote-peer-id":"5ec5d9f85793fb82","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-03T23:27:01.202662Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"234c625b0f0adc4e","from":"234c625b0f0adc4e","remote-peer-id":"5ec5d9f85793fb82","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-03T23:27:01.207549Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"234c625b0f0adc4e","from":"234c625b0f0adc4e","remote-peer-id":"5ec5d9f85793fb82","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-03T23:27:01.219777Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"234c625b0f0adc4e","from":"234c625b0f0adc4e","remote-peer-id":"5ec5d9f85793fb82","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-03T23:27:01.230233Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"234c625b0f0adc4e","from":"234c625b0f0adc4e","remote-peer-id":"5ec5d9f85793fb82","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-03T23:27:01.259773Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"234c625b0f0adc4e","from":"234c625b0f0adc4e","remote-peer-id":"5ec5d9f85793fb82","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 23:27:01 up 29 min,  0 users,  load average: 0.71, 0.54, 0.49
	Linux ha-270000 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Jul 26 03:21:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kindnet [1aed5b11fdcd] <==
	I0903 23:26:29.223359       1 main.go:324] Node ha-270000-m04 has CIDR [10.244.3.0/24] 
	I0903 23:26:39.222725       1 main.go:297] Handling node with IPs: map[172.25.116.52:{}]
	I0903 23:26:39.222797       1 main.go:301] handling current node
	I0903 23:26:39.222818       1 main.go:297] Handling node with IPs: map[172.25.120.53:{}]
	I0903 23:26:39.222825       1 main.go:324] Node ha-270000-m02 has CIDR [10.244.1.0/24] 
	I0903 23:26:39.223126       1 main.go:297] Handling node with IPs: map[172.25.124.104:{}]
	I0903 23:26:39.223139       1 main.go:324] Node ha-270000-m03 has CIDR [10.244.2.0/24] 
	I0903 23:26:39.223411       1 main.go:297] Handling node with IPs: map[172.25.114.77:{}]
	I0903 23:26:39.223423       1 main.go:324] Node ha-270000-m04 has CIDR [10.244.3.0/24] 
	I0903 23:26:49.227650       1 main.go:297] Handling node with IPs: map[172.25.116.52:{}]
	I0903 23:26:49.227748       1 main.go:301] handling current node
	I0903 23:26:49.227767       1 main.go:297] Handling node with IPs: map[172.25.120.53:{}]
	I0903 23:26:49.227773       1 main.go:324] Node ha-270000-m02 has CIDR [10.244.1.0/24] 
	I0903 23:26:49.228009       1 main.go:297] Handling node with IPs: map[172.25.124.104:{}]
	I0903 23:26:49.228019       1 main.go:324] Node ha-270000-m03 has CIDR [10.244.2.0/24] 
	I0903 23:26:49.228109       1 main.go:297] Handling node with IPs: map[172.25.114.77:{}]
	I0903 23:26:49.228116       1 main.go:324] Node ha-270000-m04 has CIDR [10.244.3.0/24] 
	I0903 23:26:59.220107       1 main.go:297] Handling node with IPs: map[172.25.120.53:{}]
	I0903 23:26:59.220140       1 main.go:324] Node ha-270000-m02 has CIDR [10.244.1.0/24] 
	I0903 23:26:59.220374       1 main.go:297] Handling node with IPs: map[172.25.124.104:{}]
	I0903 23:26:59.220382       1 main.go:324] Node ha-270000-m03 has CIDR [10.244.2.0/24] 
	I0903 23:26:59.220532       1 main.go:297] Handling node with IPs: map[172.25.114.77:{}]
	I0903 23:26:59.220538       1 main.go:324] Node ha-270000-m04 has CIDR [10.244.3.0/24] 
	I0903 23:26:59.220731       1 main.go:297] Handling node with IPs: map[172.25.116.52:{}]
	I0903 23:26:59.220741       1 main.go:301] handling current node
	
	
	==> kube-apiserver [33fa1cad1677] <==
	E0903 23:13:42.842153       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 7.4µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0903 23:13:42.842439       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="89.901µs" method="POST" path="/apis/events.k8s.io/v1/namespaces/default/events" result=null
	I0903 23:13:56.733642       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0903 23:14:07.229909       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0903 23:15:13.878181       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0903 23:15:29.232316       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0903 23:16:35.500251       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0903 23:16:54.890451       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0903 23:17:36.097545       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0903 23:17:56.141560       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0903 23:19:01.385854       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0903 23:19:18.758631       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0903 23:19:40.273948       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0903 23:20:09.537882       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0903 23:20:44.805713       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0903 23:21:25.336266       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0903 23:22:10.679247       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0903 23:22:37.532503       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0903 23:23:37.507033       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0903 23:24:04.292753       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0903 23:25:00.059558       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0903 23:25:09.480147       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0903 23:26:16.674253       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0903 23:26:17.660543       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0903 23:26:43.096253       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.25.116.52 172.25.124.104]
	
	
	==> kube-controller-manager [7f593816c5b6] <==
	I0903 22:59:48.338405       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0903 22:59:48.347114       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0903 22:59:48.347957       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0903 22:59:48.348231       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0903 22:59:48.350413       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0903 22:59:48.350547       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-270000" podCIDRs=["10.244.0.0/24"]
	I0903 22:59:48.350648       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0903 22:59:48.350759       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0903 22:59:48.353465       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0903 22:59:48.354706       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0903 22:59:48.357072       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0903 22:59:48.361992       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0903 23:00:13.305555       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0903 23:03:47.037302       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-270000-m02\" does not exist"
	I0903 23:03:47.108103       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-270000-m02" podCIDRs=["10.244.1.0/24"]
	I0903 23:03:48.353030       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-270000-m02"
	I0903 23:07:53.159425       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-270000-m03\" does not exist"
	I0903 23:07:53.231028       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-270000-m03" podCIDRs=["10.244.2.0/24"]
	I0903 23:07:53.425938       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-270000-m03"
	E0903 23:13:25.042783       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-659kk failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-659kk\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0903 23:13:25.101952       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-659kk failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-659kk\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0903 23:13:25.239273       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-270000-m04\" does not exist"
	I0903 23:13:25.350825       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-270000-m04" podCIDRs=["10.244.3.0/24"]
	I0903 23:13:28.550907       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-270000-m04"
	I0903 23:14:18.270078       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-270000-m04"
	
	
	==> kube-proxy [faad83036df8] <==
	I0903 22:59:50.779113       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0903 22:59:50.880419       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0903 22:59:50.880456       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["172.25.116.52"]
	E0903 22:59:50.880565       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0903 22:59:50.945516       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0903 22:59:50.945819       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0903 22:59:50.945901       1 server_linux.go:132] "Using iptables Proxier"
	I0903 22:59:50.968216       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0903 22:59:50.968801       1 server.go:527] "Version info" version="v1.34.0"
	I0903 22:59:50.968824       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0903 22:59:50.971984       1 config.go:200] "Starting service config controller"
	I0903 22:59:50.972002       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0903 22:59:50.972020       1 config.go:106] "Starting endpoint slice config controller"
	I0903 22:59:50.972026       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0903 22:59:50.972040       1 config.go:403] "Starting serviceCIDR config controller"
	I0903 22:59:50.972046       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0903 22:59:50.972948       1 config.go:309] "Starting node config controller"
	I0903 22:59:50.972959       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0903 22:59:50.972966       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0903 22:59:51.072869       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0903 22:59:51.073049       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0903 22:59:51.073138       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5227167cf7b2] <==
	I0903 22:59:43.605124       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0903 23:07:53.694484       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-dlw8c\": pod kube-proxy-dlw8c is already assigned to node \"ha-270000-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-dlw8c" node="ha-270000-m03"
	E0903 23:07:53.694640       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 4d595fbf-2bee-4651-a4fa-7ce87d747f6d(kube-system/kube-proxy-dlw8c) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-dlw8c"
	E0903 23:07:53.694680       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-dlw8c\": pod kube-proxy-dlw8c is already assigned to node \"ha-270000-m03\"" logger="UnhandledError" pod="kube-system/kube-proxy-dlw8c"
	I0903 23:07:53.696086       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-dlw8c" node="ha-270000-m03"
	E0903 23:13:25.461348       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-l5th8\": pod kube-proxy-l5th8 is already assigned to node \"ha-270000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-l5th8" node="ha-270000-m04"
	E0903 23:13:25.461520       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-l5th8\": pod kube-proxy-l5th8 is already assigned to node \"ha-270000-m04\"" logger="UnhandledError" pod="kube-system/kube-proxy-l5th8"
	E0903 23:13:25.475358       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-sdzrh\": pod kube-proxy-sdzrh is already assigned to node \"ha-270000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-sdzrh" node="ha-270000-m04"
	E0903 23:13:25.475515       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod b20b7bdf-2737-4da8-a05d-f6c6fea682ee(kube-system/kube-proxy-sdzrh) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-sdzrh"
	E0903 23:13:25.475537       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-sdzrh\": pod kube-proxy-sdzrh is already assigned to node \"ha-270000-m04\"" logger="UnhandledError" pod="kube-system/kube-proxy-sdzrh"
	E0903 23:13:25.475860       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-ntbtp\": pod kindnet-ntbtp is already assigned to node \"ha-270000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-ntbtp" node="ha-270000-m04"
	E0903 23:13:25.476229       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod b515af16-cd44-4131-8e35-7ffabe20686c(kube-system/kindnet-ntbtp) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-ntbtp"
	E0903 23:13:25.477456       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-ntbtp\": pod kindnet-ntbtp is already assigned to node \"ha-270000-m04\"" logger="UnhandledError" pod="kube-system/kindnet-ntbtp"
	E0903 23:13:25.478190       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-w52fq\": pod kindnet-w52fq is already assigned to node \"ha-270000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-w52fq" node="ha-270000-m04"
	E0903 23:13:25.478248       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod fb2ecc49-9819-4580-81c2-9605d57b5f7c(kube-system/kindnet-w52fq) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-w52fq"
	I0903 23:13:25.478273       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-sdzrh" node="ha-270000-m04"
	I0903 23:13:25.477502       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-ntbtp" node="ha-270000-m04"
	E0903 23:13:25.480738       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-w52fq\": pod kindnet-w52fq is already assigned to node \"ha-270000-m04\"" logger="UnhandledError" pod="kube-system/kindnet-w52fq"
	I0903 23:13:25.480854       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-w52fq" node="ha-270000-m04"
	E0903 23:13:30.950394       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-vjsrf\": pod kube-proxy-vjsrf is already assigned to node \"ha-270000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-vjsrf" node="ha-270000-m04"
	E0903 23:13:30.950476       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-vjsrf\": pod kube-proxy-vjsrf is already assigned to node \"ha-270000-m04\"" logger="UnhandledError" pod="kube-system/kube-proxy-vjsrf"
	E0903 23:13:30.957328       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-tp847\": pod kube-proxy-tp847 is already assigned to node \"ha-270000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-tp847" node="ha-270000-m04"
	E0903 23:13:30.959022       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 0bdb32e2-1b63-481e-b44d-7051370890eb(kube-system/kube-proxy-tp847) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-tp847"
	E0903 23:13:30.961752       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-tp847\": pod kube-proxy-tp847 is already assigned to node \"ha-270000-m04\"" logger="UnhandledError" pod="kube-system/kube-proxy-tp847"
	I0903 23:13:30.963268       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-tp847" node="ha-270000-m04"
	
	
	==> kubelet <==
	Sep 03 22:59:48 ha-270000 kubelet[3184]: I0903 22:59:48.433551    3184 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 03 22:59:48 ha-270000 kubelet[3184]: I0903 22:59:48.434480    3184 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 03 22:59:49 ha-270000 kubelet[3184]: I0903 22:59:49.257639    3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngqkm\" (UniqueName: \"kubernetes.io/projected/f609fa93-da46-46a5-ba36-84c291da86a5-kube-api-access-ngqkm\") pod \"kube-proxy-t96st\" (UID: \"f609fa93-da46-46a5-ba36-84c291da86a5\") " pod="kube-system/kube-proxy-t96st"
	Sep 03 22:59:49 ha-270000 kubelet[3184]: I0903 22:59:49.258332    3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f609fa93-da46-46a5-ba36-84c291da86a5-kube-proxy\") pod \"kube-proxy-t96st\" (UID: \"f609fa93-da46-46a5-ba36-84c291da86a5\") " pod="kube-system/kube-proxy-t96st"
	Sep 03 22:59:49 ha-270000 kubelet[3184]: I0903 22:59:49.259426    3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f609fa93-da46-46a5-ba36-84c291da86a5-xtables-lock\") pod \"kube-proxy-t96st\" (UID: \"f609fa93-da46-46a5-ba36-84c291da86a5\") " pod="kube-system/kube-proxy-t96st"
	Sep 03 22:59:49 ha-270000 kubelet[3184]: I0903 22:59:49.259518    3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f609fa93-da46-46a5-ba36-84c291da86a5-lib-modules\") pod \"kube-proxy-t96st\" (UID: \"f609fa93-da46-46a5-ba36-84c291da86a5\") " pod="kube-system/kube-proxy-t96st"
	Sep 03 22:59:49 ha-270000 kubelet[3184]: I0903 22:59:49.360270    3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/32ea1443-99f0-4e56-99cb-d1ce43dbcb2f-xtables-lock\") pod \"kindnet-96trb\" (UID: \"32ea1443-99f0-4e56-99cb-d1ce43dbcb2f\") " pod="kube-system/kindnet-96trb"
	Sep 03 22:59:49 ha-270000 kubelet[3184]: I0903 22:59:49.360999    3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/32ea1443-99f0-4e56-99cb-d1ce43dbcb2f-cni-cfg\") pod \"kindnet-96trb\" (UID: \"32ea1443-99f0-4e56-99cb-d1ce43dbcb2f\") " pod="kube-system/kindnet-96trb"
	Sep 03 22:59:49 ha-270000 kubelet[3184]: I0903 22:59:49.361173    3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqpzw\" (UniqueName: \"kubernetes.io/projected/32ea1443-99f0-4e56-99cb-d1ce43dbcb2f-kube-api-access-gqpzw\") pod \"kindnet-96trb\" (UID: \"32ea1443-99f0-4e56-99cb-d1ce43dbcb2f\") " pod="kube-system/kindnet-96trb"
	Sep 03 22:59:49 ha-270000 kubelet[3184]: I0903 22:59:49.361967    3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/32ea1443-99f0-4e56-99cb-d1ce43dbcb2f-lib-modules\") pod \"kindnet-96trb\" (UID: \"32ea1443-99f0-4e56-99cb-d1ce43dbcb2f\") " pod="kube-system/kindnet-96trb"
	Sep 03 22:59:50 ha-270000 kubelet[3184]: I0903 22:59:50.617131    3184 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14155ffe05146e4c150dfdd56e7ccbd470fdd08c24940763f0ce633cc7d9ca72"
	Sep 03 22:59:53 ha-270000 kubelet[3184]: I0903 22:59:53.290148    3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t96st" podStartSLOduration=4.290132028 podStartE2EDuration="4.290132028s" podCreationTimestamp="2025-09-03 22:59:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 22:59:51.732225535 +0000 UTC m=+7.582181776" watchObservedRunningTime="2025-09-03 22:59:53.290132028 +0000 UTC m=+9.140088169"
	Sep 03 23:00:11 ha-270000 kubelet[3184]: I0903 23:00:11.406228    3184 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Sep 03 23:00:11 ha-270000 kubelet[3184]: I0903 23:00:11.478149    3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-96trb" podStartSLOduration=15.921778306 podStartE2EDuration="22.478116072s" podCreationTimestamp="2025-09-03 22:59:49 +0000 UTC" firstStartedPulling="2025-09-03 22:59:50.621148194 +0000 UTC m=+6.471104335" lastFinishedPulling="2025-09-03 22:59:57.17748596 +0000 UTC m=+13.027442101" observedRunningTime="2025-09-03 22:59:58.801321448 +0000 UTC m=+14.651277689" watchObservedRunningTime="2025-09-03 23:00:11.478116072 +0000 UTC m=+27.328072313"
	Sep 03 23:00:11 ha-270000 kubelet[3184]: I0903 23:00:11.612413    3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4c3bec4-9c47-404e-98ff-21e0aee82931-config-volume\") pod \"coredns-66bc5c9577-58qw9\" (UID: \"e4c3bec4-9c47-404e-98ff-21e0aee82931\") " pod="kube-system/coredns-66bc5c9577-58qw9"
	Sep 03 23:00:11 ha-270000 kubelet[3184]: I0903 23:00:11.612826    3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff84p\" (UniqueName: \"kubernetes.io/projected/e4c3bec4-9c47-404e-98ff-21e0aee82931-kube-api-access-ff84p\") pod \"coredns-66bc5c9577-58qw9\" (UID: \"e4c3bec4-9c47-404e-98ff-21e0aee82931\") " pod="kube-system/coredns-66bc5c9577-58qw9"
	Sep 03 23:00:11 ha-270000 kubelet[3184]: I0903 23:00:11.612896    3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7643327e-078c-45c9-9a32-cdf3b7a72986-tmp\") pod \"storage-provisioner\" (UID: \"7643327e-078c-45c9-9a32-cdf3b7a72986\") " pod="kube-system/storage-provisioner"
	Sep 03 23:00:11 ha-270000 kubelet[3184]: I0903 23:00:11.612935    3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mtx9\" (UniqueName: \"kubernetes.io/projected/7643327e-078c-45c9-9a32-cdf3b7a72986-kube-api-access-2mtx9\") pod \"storage-provisioner\" (UID: \"7643327e-078c-45c9-9a32-cdf3b7a72986\") " pod="kube-system/storage-provisioner"
	Sep 03 23:00:11 ha-270000 kubelet[3184]: I0903 23:00:11.713810    3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20226b19-1d13-4057-88c1-709997f24868-config-volume\") pod \"coredns-66bc5c9577-cnk8d\" (UID: \"20226b19-1d13-4057-88c1-709997f24868\") " pod="kube-system/coredns-66bc5c9577-cnk8d"
	Sep 03 23:00:11 ha-270000 kubelet[3184]: I0903 23:00:11.714105    3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrjk6\" (UniqueName: \"kubernetes.io/projected/20226b19-1d13-4057-88c1-709997f24868-kube-api-access-lrjk6\") pod \"coredns-66bc5c9577-cnk8d\" (UID: \"20226b19-1d13-4057-88c1-709997f24868\") " pod="kube-system/coredns-66bc5c9577-cnk8d"
	Sep 03 23:00:14 ha-270000 kubelet[3184]: I0903 23:00:14.274197    3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-cnk8d" podStartSLOduration=25.274173958 podStartE2EDuration="25.274173958s" podCreationTimestamp="2025-09-03 22:59:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:00:14.230204239 +0000 UTC m=+30.080160480" watchObservedRunningTime="2025-09-03 23:00:14.274173958 +0000 UTC m=+30.124130099"
	Sep 03 23:00:14 ha-270000 kubelet[3184]: I0903 23:00:14.316561    3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.316546059 podStartE2EDuration="16.316546059s" podCreationTimestamp="2025-09-03 22:59:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:00:14.315644548 +0000 UTC m=+30.165600789" watchObservedRunningTime="2025-09-03 23:00:14.316546059 +0000 UTC m=+30.166502200"
	Sep 03 23:00:14 ha-270000 kubelet[3184]: I0903 23:00:14.385082    3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-58qw9" podStartSLOduration=25.385064668 podStartE2EDuration="25.385064668s" podCreationTimestamp="2025-09-03 22:59:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:00:14.382147734 +0000 UTC m=+30.232103975" watchObservedRunningTime="2025-09-03 23:00:14.385064668 +0000 UTC m=+30.235020909"
	Sep 03 23:09:08 ha-270000 kubelet[3184]: I0903 23:09:08.377355    3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbtbn\" (UniqueName: \"kubernetes.io/projected/04bf5fc3-6c7c-4a98-b313-5409650649e3-kube-api-access-gbtbn\") pod \"busybox-7b57f96db7-lxhhz\" (UID: \"04bf5fc3-6c7c-4a98-b313-5409650649e3\") " pod="default/busybox-7b57f96db7-lxhhz"
	Sep 03 23:09:13 ha-270000 kubelet[3184]: I0903 23:09:13.347333    3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7b57f96db7-lxhhz" podStartSLOduration=3.108610599 podStartE2EDuration="5.347316014s" podCreationTimestamp="2025-09-03 23:09:08 +0000 UTC" firstStartedPulling="2025-09-03 23:09:09.659404437 +0000 UTC m=+565.509360578" lastFinishedPulling="2025-09-03 23:09:11.898109752 +0000 UTC m=+567.748065993" observedRunningTime="2025-09-03 23:09:13.346856608 +0000 UTC m=+569.196812849" watchObservedRunningTime="2025-09-03 23:09:13.347316014 +0000 UTC m=+569.197272255"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-270000 -n ha-270000
helpers_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-270000 -n ha-270000: (12.0302693s)
helpers_test.go:269: (dbg) Run:  kubectl --context ha-270000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (50.09s)

TestMultiNode/serial/PingHostFrom2Pods (55.37s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-477700 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-477700 -- exec busybox-7b57f96db7-bj95n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-477700 -- exec busybox-7b57f96db7-bj95n -- sh -c "ping -c 1 172.25.112.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-477700 -- exec busybox-7b57f96db7-bj95n -- sh -c "ping -c 1 172.25.112.1": exit status 1 (10.503514s)

-- stdout --
	PING 172.25.112.1 (172.25.112.1): 56 data bytes
	
	--- 172.25.112.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
multinode_test.go:584: Failed to ping host (172.25.112.1) from pod (busybox-7b57f96db7-bj95n): exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-477700 -- exec busybox-7b57f96db7-vpdc8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-477700 -- exec busybox-7b57f96db7-vpdc8 -- sh -c "ping -c 1 172.25.112.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-477700 -- exec busybox-7b57f96db7-vpdc8 -- sh -c "ping -c 1 172.25.112.1": exit status 1 (10.4755493s)

-- stdout --
	PING 172.25.112.1 (172.25.112.1): 56 data bytes
	
	--- 172.25.112.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
multinode_test.go:584: Failed to ping host (172.25.112.1) from pod (busybox-7b57f96db7-vpdc8): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-477700 -n multinode-477700
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-477700 -n multinode-477700: (11.8642361s)
helpers_test.go:252: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 logs -n 25: (8.1499823s)
helpers_test.go:260: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                         ARGS                                                                                                         │       PROFILE        │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ mount-start-2-933900 ssh -- ls /minikube-host                                                                                                                                                                        │ mount-start-2-933900 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:53 UTC │ 03 Sep 25 23:53 UTC │
	│ delete  │ -p mount-start-1-933900 --alsologtostderr -v=5                                                                                                                                                                       │ mount-start-1-933900 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:53 UTC │ 03 Sep 25 23:53 UTC │
	│ ssh     │ mount-start-2-933900 ssh -- ls /minikube-host                                                                                                                                                                        │ mount-start-2-933900 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:53 UTC │ 03 Sep 25 23:54 UTC │
	│ stop    │ -p mount-start-2-933900                                                                                                                                                                                              │ mount-start-2-933900 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:54 UTC │ 03 Sep 25 23:54 UTC │
	│ start   │ -p mount-start-2-933900                                                                                                                                                                                              │ mount-start-2-933900 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:54 UTC │ 03 Sep 25 23:56 UTC │
	│ mount   │ C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMountStartserial1258064499\001:/minikube-host --profile mount-start-2-933900 --v 0 --9p-version 9p2000.L --gid 0 --ip  --msize 6543 --port 46465 --type 9p --uid 0 │ mount-start-2-933900 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:56 UTC │                     │
	│ ssh     │ mount-start-2-933900 ssh -- ls /minikube-host                                                                                                                                                                        │ mount-start-2-933900 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:56 UTC │ 03 Sep 25 23:56 UTC │
	│ delete  │ -p mount-start-2-933900                                                                                                                                                                                              │ mount-start-2-933900 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:56 UTC │ 03 Sep 25 23:57 UTC │
	│ delete  │ -p mount-start-1-933900                                                                                                                                                                                              │ mount-start-1-933900 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:57 UTC │ 03 Sep 25 23:57 UTC │
	│ start   │ -p multinode-477700 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=hyperv                                                                                                                       │ multinode-477700     │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 23:57 UTC │ 04 Sep 25 00:03 UTC │
	│ kubectl │ -p multinode-477700 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml                                                                                                                                    │ multinode-477700     │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:04 UTC │ 04 Sep 25 00:04 UTC │
	│ kubectl │ -p multinode-477700 -- rollout status deployment/busybox                                                                                                                                                             │ multinode-477700     │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:04 UTC │ 04 Sep 25 00:04 UTC │
	│ kubectl │ -p multinode-477700 -- get pods -o jsonpath='{.items[*].status.podIP}'                                                                                                                                               │ multinode-477700     │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:04 UTC │ 04 Sep 25 00:04 UTC │
	│ kubectl │ -p multinode-477700 -- get pods -o jsonpath='{.items[*].metadata.name}'                                                                                                                                              │ multinode-477700     │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:04 UTC │ 04 Sep 25 00:04 UTC │
	│ kubectl │ -p multinode-477700 -- exec busybox-7b57f96db7-bj95n -- nslookup kubernetes.io                                                                                                                                       │ multinode-477700     │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:04 UTC │ 04 Sep 25 00:04 UTC │
	│ kubectl │ -p multinode-477700 -- exec busybox-7b57f96db7-vpdc8 -- nslookup kubernetes.io                                                                                                                                       │ multinode-477700     │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:04 UTC │ 04 Sep 25 00:04 UTC │
	│ kubectl │ -p multinode-477700 -- exec busybox-7b57f96db7-bj95n -- nslookup kubernetes.default                                                                                                                                  │ multinode-477700     │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:04 UTC │ 04 Sep 25 00:04 UTC │
	│ kubectl │ -p multinode-477700 -- exec busybox-7b57f96db7-vpdc8 -- nslookup kubernetes.default                                                                                                                                  │ multinode-477700     │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:04 UTC │ 04 Sep 25 00:04 UTC │
	│ kubectl │ -p multinode-477700 -- exec busybox-7b57f96db7-bj95n -- nslookup kubernetes.default.svc.cluster.local                                                                                                                │ multinode-477700     │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:04 UTC │ 04 Sep 25 00:04 UTC │
	│ kubectl │ -p multinode-477700 -- exec busybox-7b57f96db7-vpdc8 -- nslookup kubernetes.default.svc.cluster.local                                                                                                                │ multinode-477700     │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:04 UTC │ 04 Sep 25 00:04 UTC │
	│ kubectl │ -p multinode-477700 -- get pods -o jsonpath='{.items[*].metadata.name}'                                                                                                                                              │ multinode-477700     │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:04 UTC │ 04 Sep 25 00:04 UTC │
	│ kubectl │ -p multinode-477700 -- exec busybox-7b57f96db7-bj95n -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3                                                                                          │ multinode-477700     │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:04 UTC │ 04 Sep 25 00:04 UTC │
	│ kubectl │ -p multinode-477700 -- exec busybox-7b57f96db7-bj95n -- sh -c ping -c 1 172.25.112.1                                                                                                                                 │ multinode-477700     │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:04 UTC │                     │
	│ kubectl │ -p multinode-477700 -- exec busybox-7b57f96db7-vpdc8 -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3                                                                                          │ multinode-477700     │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:04 UTC │ 04 Sep 25 00:04 UTC │
	│ kubectl │ -p multinode-477700 -- exec busybox-7b57f96db7-vpdc8 -- sh -c ping -c 1 172.25.112.1                                                                                                                                 │ multinode-477700     │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:04 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/03 23:57:02
	Running on machine: minikube6
	Binary: Built with gc go1.24.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0903 23:57:02.746522    4292 out.go:360] Setting OutFile to fd 1788 ...
	I0903 23:57:02.824810    4292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:57:02.824810    4292 out.go:374] Setting ErrFile to fd 1832...
	I0903 23:57:02.824810    4292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:57:02.843835    4292 out.go:368] Setting JSON to false
	I0903 23:57:02.845809    4292 start.go:130] hostinfo: {"hostname":"minikube6","uptime":26928,"bootTime":1756916894,"procs":179,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6282 Build 19045.6282","kernelVersion":"10.0.19045.6282 Build 19045.6282","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0903 23:57:02.846812    4292 start.go:138] gopshost.Virtualization returned error: not implemented yet
	I0903 23:57:02.852976    4292 out.go:179] * [multinode-477700] minikube v1.36.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6282 Build 19045.6282
	I0903 23:57:02.856912    4292 notify.go:220] Checking for updates...
	I0903 23:57:02.858808    4292 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0903 23:57:02.861847    4292 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0903 23:57:02.864804    4292 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0903 23:57:02.867806    4292 out.go:179]   - MINIKUBE_LOCATION=21341
	I0903 23:57:02.870799    4292 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0903 23:57:02.875941    4292 config.go:182] Loaded profile config "ha-270000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0903 23:57:02.876530    4292 driver.go:421] Setting default libvirt URI to qemu:///system
	I0903 23:57:08.120226    4292 out.go:179] * Using the hyperv driver based on user configuration
	I0903 23:57:08.124251    4292 start.go:304] selected driver: hyperv
	I0903 23:57:08.124334    4292 start.go:918] validating driver "hyperv" against <nil>
	I0903 23:57:08.124334    4292 start.go:929] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0903 23:57:08.174272    4292 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0903 23:57:08.176209    4292 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0903 23:57:08.176276    4292 cni.go:84] Creating CNI manager for ""
	I0903 23:57:08.176344    4292 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0903 23:57:08.176344    4292 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0903 23:57:08.176344    4292 start.go:348] cluster config:
	{Name:multinode-477700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:multinode-477700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:57:08.176344    4292 iso.go:125] acquiring lock: {Name:mk966bde02eeea119c68f0830e579f0a83ec9e11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 23:57:08.181096    4292 out.go:179] * Starting "multinode-477700" primary control-plane node in "multinode-477700" cluster
	I0903 23:57:08.183309    4292 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0903 23:57:08.183309    4292 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0903 23:57:08.183309    4292 cache.go:58] Caching tarball of preloaded images
	I0903 23:57:08.183960    4292 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0903 23:57:08.183960    4292 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0903 23:57:08.183960    4292 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\config.json ...
	I0903 23:57:08.183960    4292 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\config.json: {Name:mk5961d9b308cd18a11c237a2cab71d576f98991 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:57:08.185601    4292 start.go:360] acquireMachinesLock for multinode-477700: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0903 23:57:08.185827    4292 start.go:364] duration metric: took 225.8µs to acquireMachinesLock for "multinode-477700"
	I0903 23:57:08.186017    4292 start.go:93] Provisioning new machine with config: &{Name:multinode-477700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:multinode-477700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0903 23:57:08.186017    4292 start.go:125] createHost starting for "" (driver="hyperv")
	I0903 23:57:08.190946    4292 out.go:252] * Creating hyperv VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0903 23:57:08.191682    4292 start.go:159] libmachine.API.Create for "multinode-477700" (driver="hyperv")
	I0903 23:57:08.191682    4292 client.go:168] LocalClient.Create starting
	I0903 23:57:08.191870    4292 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0903 23:57:08.192501    4292 main.go:141] libmachine: Decoding PEM data...
	I0903 23:57:08.192533    4292 main.go:141] libmachine: Parsing certificate...
	I0903 23:57:08.192715    4292 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0903 23:57:08.192879    4292 main.go:141] libmachine: Decoding PEM data...
	I0903 23:57:08.192879    4292 main.go:141] libmachine: Parsing certificate...
	I0903 23:57:08.192879    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0903 23:57:10.217835    4292 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0903 23:57:10.217835    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:57:10.217835    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0903 23:57:11.986440    4292 main.go:141] libmachine: [stdout =====>] : False
	
	I0903 23:57:11.987120    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:57:11.987234    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0903 23:57:13.497910    4292 main.go:141] libmachine: [stdout =====>] : True
	
	I0903 23:57:13.498863    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:57:13.499038    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0903 23:57:16.981365    4292 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0903 23:57:16.982189    4292 main.go:141] libmachine: [stderr =====>] : 
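Annotation: the switch-discovery step above shells out to PowerShell and parses the JSON that `Get-VMSwitch` emits, preferring an External switch and falling back to the built-in "Default Switch" matched by its well-known GUID. A minimal illustrative sketch of that selection logic (hypothetical helper, not minikube's actual code):

```python
import json

def pick_switch(powershell_json: str, default_switch_id: str) -> str:
    """Pick a Hyper-V switch name from Get-VMSwitch JSON output.

    Illustrative only: External switches (SwitchType 2 in Hyper-V's enum)
    win; otherwise fall back to the switch matching the given GUID
    (the built-in "Default Switch", SwitchType 1 = Internal in the log).
    """
    switches = json.loads(powershell_json)
    external = [s for s in switches if s["SwitchType"] == 2]
    if external:
        return external[0]["Name"]
    for s in switches:
        if s["Id"].lower() == default_switch_id.lower():
            return s["Name"]
    raise RuntimeError("no usable Hyper-V switch found")

# The exact JSON seen in the log above:
sample = '''[
    {
        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
        "Name":  "Default Switch",
        "SwitchType":  1
    }
]'''
print(pick_switch(sample, "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444"))
```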
	I0903 23:57:16.984911    4292 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.36.0-1753487480-21147-amd64.iso...
	I0903 23:57:17.630736    4292 main.go:141] libmachine: Creating SSH key...
	I0903 23:57:17.909013    4292 main.go:141] libmachine: Creating VM...
	I0903 23:57:17.909082    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0903 23:57:20.721140    4292 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0903 23:57:20.721984    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:57:20.721984    4292 main.go:141] libmachine: Using switch "Default Switch"
	I0903 23:57:20.722112    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0903 23:57:22.491095    4292 main.go:141] libmachine: [stdout =====>] : True
	
	I0903 23:57:22.491095    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:57:22.491095    4292 main.go:141] libmachine: Creating VHD
	I0903 23:57:22.492194    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700\fixed.vhd' -SizeBytes 10MB -Fixed
	I0903 23:57:26.177965    4292 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 8047116D-2819-418E-A9D8-0F498C6481AD
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0903 23:57:26.178048    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:57:26.178048    4292 main.go:141] libmachine: Writing magic tar header
	I0903 23:57:26.178116    4292 main.go:141] libmachine: Writing SSH key tar header
	I0903 23:57:26.191946    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700\disk.vhd' -VHDType Dynamic -DeleteSource
	I0903 23:57:29.313418    4292 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:57:29.313530    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:57:29.313530    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700\disk.vhd' -SizeBytes 20000MB
	I0903 23:57:31.778560    4292 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:57:31.778870    4292 main.go:141] libmachine: [stderr =====>] : 
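Annotation: the three disk-preparation steps above follow a fixed pattern — create a tiny 10MB fixed VHD (into which the "magic tar header" and SSH key are written), convert it to a dynamic VHD, then resize it to the requested capacity. A sketch that composes the same PowerShell command lines as seen in the log (paths and quoting follow the log output; this is not minikube's source):

```python
def vhd_prep_commands(machine_dir: str, size_mb: int) -> list[str]:
    """Compose the Hyper-V disk-prep commands visible in the log:
    New-VHD (fixed, 10MB) -> Convert-VHD (dynamic) -> Resize-VHD.
    Illustrative helper; argument names mirror the logged invocations.
    """
    fixed = rf"{machine_dir}\fixed.vhd"
    disk = rf"{machine_dir}\disk.vhd"
    return [
        f"Hyper-V\\New-VHD -Path '{fixed}' -SizeBytes 10MB -Fixed",
        f"Hyper-V\\Convert-VHD -Path '{fixed}' -DestinationPath '{disk}' "
        f"-VHDType Dynamic -DeleteSource",
        f"Hyper-V\\Resize-VHD -Path '{disk}' -SizeBytes {size_mb}MB",
    ]

for cmd in vhd_prep_commands(r"C:\path\to\machines\demo", 20000):
    print(cmd)
```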
	I0903 23:57:31.778916    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-477700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700' -SwitchName 'Default Switch' -MemoryStartupBytes 3072MB
	I0903 23:57:35.376349    4292 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-477700 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0903 23:57:35.376349    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:57:35.377014    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-477700 -DynamicMemoryEnabled $false
	I0903 23:57:37.630107    4292 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:57:37.630107    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:57:37.630184    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-477700 -Count 2
	I0903 23:57:39.809551    4292 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:57:39.809816    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:57:39.809884    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-477700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700\boot2docker.iso'
	I0903 23:57:42.326162    4292 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:57:42.326162    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:57:42.326453    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-477700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700\disk.vhd'
	I0903 23:57:44.923560    4292 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:57:44.924069    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:57:44.924069    4292 main.go:141] libmachine: Starting VM...
	I0903 23:57:44.924069    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-477700
	I0903 23:57:47.939020    4292 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:57:47.940008    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:57:47.940008    4292 main.go:141] libmachine: Waiting for host to start...
	I0903 23:57:47.940008    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0903 23:57:50.167732    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:57:50.167928    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:57:50.168076    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0903 23:57:52.645928    4292 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:57:52.646083    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:57:53.646343    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0903 23:57:55.823940    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:57:55.823940    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:57:55.824618    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0903 23:57:58.300776    4292 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:57:58.301254    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:57:59.302386    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0903 23:58:01.431221    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:58:01.431221    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:58:01.431221    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0903 23:58:03.926056    4292 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:58:03.926056    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:58:04.926496    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0903 23:58:07.178394    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:58:07.178394    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:58:07.179507    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0903 23:58:09.734412    4292 main.go:141] libmachine: [stdout =====>] : 
	I0903 23:58:09.734412    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:58:10.734866    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0903 23:58:12.921175    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:58:12.921175    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:58:12.921384    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0903 23:58:15.502378    4292 main.go:141] libmachine: [stdout =====>] : 172.25.126.63
	
	I0903 23:58:15.502378    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:58:15.502654    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0903 23:58:17.629243    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:58:17.629243    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:58:17.629243    4292 machine.go:93] provisionDockerMachine start ...
	I0903 23:58:17.629243    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0903 23:58:19.744013    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:58:19.744013    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:58:19.744013    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0903 23:58:22.233901    4292 main.go:141] libmachine: [stdout =====>] : 172.25.126.63
	
	I0903 23:58:22.233901    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:58:22.241369    4292 main.go:141] libmachine: Using SSH client type: native
	I0903 23:58:22.257940    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.126.63 22 <nil> <nil>}
	I0903 23:58:22.258016    4292 main.go:141] libmachine: About to run SSH command:
	hostname
	I0903 23:58:22.390124    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0903 23:58:22.390124    4292 buildroot.go:166] provisioning hostname "multinode-477700"
	I0903 23:58:22.390662    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0903 23:58:24.504264    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:58:24.504264    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:58:24.505032    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0903 23:58:27.089226    4292 main.go:141] libmachine: [stdout =====>] : 172.25.126.63
	
	I0903 23:58:27.089293    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:58:27.095559    4292 main.go:141] libmachine: Using SSH client type: native
	I0903 23:58:27.096182    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.126.63 22 <nil> <nil>}
	I0903 23:58:27.096182    4292 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-477700 && echo "multinode-477700" | sudo tee /etc/hostname
	I0903 23:58:27.262583    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-477700
	
	I0903 23:58:27.262583    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0903 23:58:29.378627    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:58:29.378627    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:58:29.379668    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0903 23:58:31.838571    4292 main.go:141] libmachine: [stdout =====>] : 172.25.126.63
	
	I0903 23:58:31.838571    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:58:31.844893    4292 main.go:141] libmachine: Using SSH client type: native
	I0903 23:58:31.845695    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.126.63 22 <nil> <nil>}
	I0903 23:58:31.845785    4292 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-477700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-477700/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-477700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0903 23:58:31.988046    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
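Annotation: the shell fragment just executed guards /etc/hosts idempotently — do nothing if a line already ends with the hostname, otherwise rewrite an existing `127.0.1.1` entry or append one. The same grep/sed logic, translated onto a plain string for illustration (not the exact code minikube runs):

```python
import re

def ensure_hostname(hosts: str, name: str) -> str:
    """String-level translation of the provisioner's /etc/hosts fragment:
    skip if any line already ends in the hostname; else rewrite a
    127.0.1.1 line in place, or append a fresh one. Illustrative only.
    """
    if re.search(rf"^.*\s{re.escape(name)}$", hosts, re.M):
        return hosts  # hostname already present on some line
    if re.search(r"^127\.0\.1\.1\s.*$", hosts, re.M):
        return re.sub(r"^127\.0\.1\.1\s.*$", f"127.0.1.1 {name}",
                      hosts, flags=re.M)
    return hosts + f"127.0.1.1 {name}\n"

print(ensure_hostname("127.0.0.1 localhost\n", "multinode-477700"))
```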
	I0903 23:58:31.988046    4292 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0903 23:58:31.988046    4292 buildroot.go:174] setting up certificates
	I0903 23:58:31.988046    4292 provision.go:84] configureAuth start
	I0903 23:58:31.988282    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0903 23:58:34.056883    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:58:34.056883    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:58:34.057962    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0903 23:58:36.554472    4292 main.go:141] libmachine: [stdout =====>] : 172.25.126.63
	
	I0903 23:58:36.554958    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:58:36.555155    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0903 23:58:38.670976    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:58:38.670976    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:58:38.671507    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0903 23:58:41.171415    4292 main.go:141] libmachine: [stdout =====>] : 172.25.126.63
	
	I0903 23:58:41.172234    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:58:41.172323    4292 provision.go:143] copyHostCerts
	I0903 23:58:41.172386    4292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0903 23:58:41.172386    4292 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0903 23:58:41.172386    4292 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0903 23:58:41.173082    4292 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0903 23:58:41.174354    4292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0903 23:58:41.174757    4292 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0903 23:58:41.174757    4292 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0903 23:58:41.174757    4292 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0903 23:58:41.176259    4292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0903 23:58:41.176259    4292 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0903 23:58:41.176259    4292 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0903 23:58:41.177076    4292 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0903 23:58:41.179388    4292 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-477700 san=[127.0.0.1 172.25.126.63 localhost minikube multinode-477700]
	I0903 23:58:41.353241    4292 provision.go:177] copyRemoteCerts
	I0903 23:58:41.364213    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0903 23:58:41.364213    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0903 23:58:43.378508    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:58:43.378508    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:58:43.378630    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0903 23:58:45.861970    4292 main.go:141] libmachine: [stdout =====>] : 172.25.126.63
	
	I0903 23:58:45.862519    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:58:45.862818    4292 sshutil.go:53] new ssh client: &{IP:172.25.126.63 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700\id_rsa Username:docker}
	I0903 23:58:45.960716    4292 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.596439s)
	I0903 23:58:45.960716    4292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0903 23:58:45.960716    4292 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0903 23:58:46.010787    4292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0903 23:58:46.011164    4292 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0903 23:58:46.061227    4292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0903 23:58:46.061903    4292 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0903 23:58:46.113811    4292 provision.go:87] duration metric: took 14.1252868s to configureAuth
	I0903 23:58:46.113864    4292 buildroot.go:189] setting minikube options for container-runtime
	I0903 23:58:46.114929    4292 config.go:182] Loaded profile config "multinode-477700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0903 23:58:46.115101    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0903 23:58:48.194848    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:58:48.194848    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:58:48.194954    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0903 23:58:50.697847    4292 main.go:141] libmachine: [stdout =====>] : 172.25.126.63
	
	I0903 23:58:50.697847    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:58:50.707642    4292 main.go:141] libmachine: Using SSH client type: native
	I0903 23:58:50.707642    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.126.63 22 <nil> <nil>}
	I0903 23:58:50.707642    4292 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0903 23:58:50.854742    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0903 23:58:50.854742    4292 buildroot.go:70] root file system type: tmpfs
	I0903 23:58:50.855117    4292 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0903 23:58:50.855117    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0903 23:58:52.997562    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:58:52.998338    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:58:52.998500    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0903 23:58:55.465431    4292 main.go:141] libmachine: [stdout =====>] : 172.25.126.63
	
	I0903 23:58:55.465499    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:58:55.471961    4292 main.go:141] libmachine: Using SSH client type: native
	I0903 23:58:55.472559    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.126.63 22 <nil> <nil>}
	I0903 23:58:55.472715    4292 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0903 23:58:55.628712    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0903 23:58:55.628858    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0903 23:58:57.703515    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:58:57.703515    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:58:57.703515    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0903 23:59:00.150543    4292 main.go:141] libmachine: [stdout =====>] : 172.25.126.63
	
	I0903 23:59:00.151488    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:59:00.158000    4292 main.go:141] libmachine: Using SSH client type: native
	I0903 23:59:00.158262    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.126.63 22 <nil> <nil>}
	I0903 23:59:00.158262    4292 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0903 23:59:01.517946    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
	
	I0903 23:59:01.517946    4292 machine.go:96] duration metric: took 43.8880936s to provisionDockerMachine
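Annotation: the unit-install step above uses a write-if-changed pattern — `diff -u old new || { mv new old; systemctl daemon-reload && enable && restart; }` — so Docker is only reconfigured and restarted when the rendered docker.service actually differs (here the old file didn't exist, hence the `can't stat` diff output and the fresh symlink). The same pattern in miniature, as a hypothetical local-file helper (the real step also reloads systemd):

```python
import filecmp
import os
import shutil
import tempfile

def install_if_changed(staged_path: str, dest_path: str) -> bool:
    """Install a staged config file only when it differs from the one
    already in place; return whether an install happened. Mirrors the
    log's diff-or-replace shell one-liner. Illustrative sketch only.
    """
    if os.path.exists(dest_path) and filecmp.cmp(staged_path, dest_path,
                                                 shallow=False):
        os.remove(staged_path)  # identical: discard the staged copy
        return False
    shutil.move(staged_path, dest_path)  # new or changed: install it
    return True

# Demo: first install proceeds (no prior unit, as in the log above);
# a byte-identical rewrite is a no-op.
d = tempfile.mkdtemp()
staged = os.path.join(d, "docker.service.new")
dest = os.path.join(d, "docker.service")
open(staged, "w").write("[Unit]\nDescription=demo\n")
print(install_if_changed(staged, dest))   # True
open(staged, "w").write("[Unit]\nDescription=demo\n")
print(install_if_changed(staged, dest))   # False
```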
	I0903 23:59:01.517946    4292 client.go:171] duration metric: took 1m53.3246893s to LocalClient.Create
	I0903 23:59:01.517946    4292 start.go:167] duration metric: took 1m53.3246893s to libmachine.API.Create "multinode-477700"
	I0903 23:59:01.517946    4292 start.go:293] postStartSetup for "multinode-477700" (driver="hyperv")
	I0903 23:59:01.517946    4292 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0903 23:59:01.528949    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0903 23:59:01.528949    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0903 23:59:03.588092    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:59:03.588185    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:59:03.588270    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0903 23:59:06.074383    4292 main.go:141] libmachine: [stdout =====>] : 172.25.126.63
	
	I0903 23:59:06.074383    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:59:06.075084    4292 sshutil.go:53] new ssh client: &{IP:172.25.126.63 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700\id_rsa Username:docker}
	I0903 23:59:06.196313    4292 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6671562s)
	I0903 23:59:06.211318    4292 ssh_runner.go:195] Run: cat /etc/os-release
	I0903 23:59:06.220862    4292 info.go:137] Remote host: Buildroot 2025.02
	I0903 23:59:06.220862    4292 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0903 23:59:06.221523    4292 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0903 23:59:06.222836    4292 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem -> 22202.pem in /etc/ssl/certs
	I0903 23:59:06.222836    4292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem -> /etc/ssl/certs/22202.pem
	I0903 23:59:06.235152    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0903 23:59:06.257196    4292 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem --> /etc/ssl/certs/22202.pem (1708 bytes)
	I0903 23:59:06.312811    4292 start.go:296] duration metric: took 4.7947545s for postStartSetup
	I0903 23:59:06.315767    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0903 23:59:08.477508    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:59:08.477508    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:59:08.477717    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0903 23:59:10.983100    4292 main.go:141] libmachine: [stdout =====>] : 172.25.126.63
	
	I0903 23:59:10.983100    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:59:10.984135    4292 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\config.json ...
	I0903 23:59:10.987748    4292 start.go:128] duration metric: took 2m2.7999594s to createHost
	I0903 23:59:10.987907    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0903 23:59:13.011515    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:59:13.012241    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:59:13.012241    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0903 23:59:15.502462    4292 main.go:141] libmachine: [stdout =====>] : 172.25.126.63
	
	I0903 23:59:15.503472    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:59:15.509074    4292 main.go:141] libmachine: Using SSH client type: native
	I0903 23:59:15.509710    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.126.63 22 <nil> <nil>}
	I0903 23:59:15.509765    4292 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0903 23:59:15.641225    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 1756943955.660698628
	
	I0903 23:59:15.641225    4292 fix.go:216] guest clock: 1756943955.660698628
	I0903 23:59:15.641225    4292 fix.go:229] Guest: 2025-09-03 23:59:15.660698628 +0000 UTC Remote: 2025-09-03 23:59:10.9878287 +0000 UTC m=+128.334421501 (delta=4.672869928s)
	I0903 23:59:15.641225    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0903 23:59:17.694133    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:59:17.695021    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:59:17.695060    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0903 23:59:20.148275    4292 main.go:141] libmachine: [stdout =====>] : 172.25.126.63
	
	I0903 23:59:20.149324    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:59:20.156099    4292 main.go:141] libmachine: Using SSH client type: native
	I0903 23:59:20.156835    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.126.63 22 <nil> <nil>}
	I0903 23:59:20.156835    4292 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1756943955
	I0903 23:59:20.307300    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Sep  3 23:59:15 UTC 2025
	
	I0903 23:59:20.307300    4292 fix.go:236] clock set: Wed Sep  3 23:59:15 UTC 2025
	 (err=<nil>)
	I0903 23:59:20.307300    4292 start.go:83] releasing machines lock for "multinode-477700", held for 2m12.1196088s
	I0903 23:59:20.307300    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0903 23:59:22.381421    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:59:22.381843    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:59:22.381843    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0903 23:59:24.892025    4292 main.go:141] libmachine: [stdout =====>] : 172.25.126.63
	
	I0903 23:59:24.892025    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:59:24.897319    4292 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0903 23:59:24.897418    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0903 23:59:24.907815    4292 ssh_runner.go:195] Run: cat /version.json
	I0903 23:59:24.907815    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0903 23:59:27.064428    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:59:27.064428    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:59:27.064428    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0903 23:59:27.064938    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:59:27.065020    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0903 23:59:27.065020    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0903 23:59:29.704355    4292 main.go:141] libmachine: [stdout =====>] : 172.25.126.63
	
	I0903 23:59:29.704468    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:59:29.704665    4292 sshutil.go:53] new ssh client: &{IP:172.25.126.63 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700\id_rsa Username:docker}
	I0903 23:59:29.724839    4292 main.go:141] libmachine: [stdout =====>] : 172.25.126.63
	
	I0903 23:59:29.724839    4292 main.go:141] libmachine: [stderr =====>] : 
	I0903 23:59:29.725820    4292 sshutil.go:53] new ssh client: &{IP:172.25.126.63 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700\id_rsa Username:docker}
	I0903 23:59:29.796142    4292 ssh_runner.go:235] Completed: cat /version.json: (4.8882591s)
	I0903 23:59:29.808309    4292 ssh_runner.go:195] Run: systemctl --version
	I0903 23:59:29.814272    4292 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9167857s)
	W0903 23:59:29.814272    4292 start.go:868] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0903 23:59:29.833775    4292 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0903 23:59:29.844239    4292 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0903 23:59:29.856303    4292 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0903 23:59:29.889679    4292 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0903 23:59:29.889814    4292 start.go:495] detecting cgroup driver to use...
	I0903 23:59:29.890176    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0903 23:59:29.941400    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W0903 23:59:29.948045    4292 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0903 23:59:29.948045    4292 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0903 23:59:29.976953    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0903 23:59:29.998979    4292 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0903 23:59:30.009563    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0903 23:59:30.043679    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0903 23:59:30.089717    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0903 23:59:30.123299    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0903 23:59:30.155803    4292 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0903 23:59:30.191819    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0903 23:59:30.226934    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0903 23:59:30.264467    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0903 23:59:30.299754    4292 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0903 23:59:30.318941    4292 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0903 23:59:30.330575    4292 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0903 23:59:30.365217    4292 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0903 23:59:30.395746    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:59:30.614180    4292 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0903 23:59:30.670335    4292 start.go:495] detecting cgroup driver to use...
	I0903 23:59:30.684655    4292 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0903 23:59:30.719283    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0903 23:59:30.755412    4292 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0903 23:59:30.802882    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0903 23:59:30.845787    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0903 23:59:30.883728    4292 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0903 23:59:30.953257    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0903 23:59:30.978101    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0903 23:59:31.034137    4292 ssh_runner.go:195] Run: which cri-dockerd
	I0903 23:59:31.059292    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0903 23:59:31.079417    4292 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0903 23:59:31.128021    4292 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0903 23:59:31.376962    4292 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0903 23:59:31.611057    4292 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0903 23:59:31.611237    4292 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0903 23:59:31.663724    4292 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0903 23:59:31.701018    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:59:31.915411    4292 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0903 23:59:32.603682    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0903 23:59:32.645617    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0903 23:59:32.681309    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0903 23:59:32.722022    4292 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0903 23:59:32.946237    4292 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0903 23:59:33.174739    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:59:33.405732    4292 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0903 23:59:33.471019    4292 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0903 23:59:33.512798    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:59:33.741057    4292 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0903 23:59:33.902222    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0903 23:59:33.924639    4292 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0903 23:59:33.937442    4292 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0903 23:59:33.946111    4292 start.go:563] Will wait 60s for crictl version
	I0903 23:59:33.957255    4292 ssh_runner.go:195] Run: which crictl
	I0903 23:59:33.978137    4292 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0903 23:59:34.026709    4292 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.3.2
	RuntimeApiVersion:  v1
	I0903 23:59:34.037502    4292 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0903 23:59:34.084073    4292 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0903 23:59:34.119935    4292 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.3.2 ...
	I0903 23:59:34.120073    4292 ip.go:180] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0903 23:59:34.125274    4292 ip.go:194] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0903 23:59:34.125274    4292 ip.go:194] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0903 23:59:34.125274    4292 ip.go:189] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0903 23:59:34.125274    4292 ip.go:215] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:71:2e:33 Flags:up|broadcast|multicast|running}
	I0903 23:59:34.127870    4292 ip.go:218] interface addr: fe80::b536:5e95:cebf:bd87/64
	I0903 23:59:34.127870    4292 ip.go:218] interface addr: 172.25.112.1/20
	I0903 23:59:34.141783    4292 ssh_runner.go:195] Run: grep 172.25.112.1	host.minikube.internal$ /etc/hosts
	I0903 23:59:34.148243    4292 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.112.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0903 23:59:34.172605    4292 kubeadm.go:875] updating cluster {Name:multinode-477700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:multinode-477700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.126.63 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0903 23:59:34.172812    4292 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0903 23:59:34.183820    4292 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0903 23:59:34.206331    4292 docker.go:691] Got preloaded images: 
	I0903 23:59:34.206331    4292 docker.go:697] registry.k8s.io/kube-apiserver:v1.34.0 wasn't preloaded
	I0903 23:59:34.218702    4292 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0903 23:59:34.251111    4292 ssh_runner.go:195] Run: which lz4
	I0903 23:59:34.257405    4292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0903 23:59:34.270325    4292 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0903 23:59:34.276931    4292 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0903 23:59:34.277394    4292 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (353447550 bytes)
	I0903 23:59:36.282818    4292 docker.go:655] duration metric: took 2.025122s to copy over tarball
	I0903 23:59:36.295197    4292 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0903 23:59:45.160268    4292 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.8649483s)
	I0903 23:59:45.160404    4292 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0903 23:59:45.228862    4292 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0903 23:59:45.247990    4292 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2632 bytes)
	I0903 23:59:45.297272    4292 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0903 23:59:45.331467    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:59:45.545218    4292 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0903 23:59:47.396950    4292 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.8511622s)
	I0903 23:59:47.406063    4292 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0903 23:59:47.437726    4292 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0903 23:59:47.437921    4292 cache_images.go:85] Images are preloaded, skipping loading
	I0903 23:59:47.437921    4292 kubeadm.go:926] updating node { 172.25.126.63 8443 v1.34.0 docker true true} ...
	I0903 23:59:47.437921    4292 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-477700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.126.63
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:multinode-477700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0903 23:59:47.450048    4292 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0903 23:59:47.518075    4292 cni.go:84] Creating CNI manager for ""
	I0903 23:59:47.518075    4292 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0903 23:59:47.518075    4292 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0903 23:59:47.518075    4292 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.126.63 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-477700 NodeName:multinode-477700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.126.63"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.126.63 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0903 23:59:47.518075    4292 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.126.63
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-477700"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.25.126.63"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.126.63"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0903 23:59:47.531205    4292 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0903 23:59:47.557000    4292 binaries.go:44] Found k8s binaries, skipping transfer
	I0903 23:59:47.568826    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0903 23:59:47.591516    4292 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0903 23:59:47.624406    4292 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0903 23:59:47.659136    4292 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I0903 23:59:47.706798    4292 ssh_runner.go:195] Run: grep 172.25.126.63	control-plane.minikube.internal$ /etc/hosts
	I0903 23:59:47.712438    4292 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.126.63	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0903 23:59:47.748394    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:59:47.971719    4292 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0903 23:59:48.021646    4292 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700 for IP: 172.25.126.63
	I0903 23:59:48.021708    4292 certs.go:194] generating shared ca certs ...
	I0903 23:59:48.021800    4292 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:59:48.022949    4292 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0903 23:59:48.023427    4292 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0903 23:59:48.023643    4292 certs.go:256] generating profile certs ...
	I0903 23:59:48.024315    4292 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\client.key
	I0903 23:59:48.024524    4292 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\client.crt with IP's: []
	I0903 23:59:48.403864    4292 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\client.crt ...
	I0903 23:59:48.403864    4292 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\client.crt: {Name:mke79d27397de44b4aa7490854f1518bd71bc3f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:59:48.405660    4292 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\client.key ...
	I0903 23:59:48.405660    4292 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\client.key: {Name:mkdc8511e3264d8454098e67eb28300f10a55043 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:59:48.407208    4292 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.key.f0e78aee
	I0903 23:59:48.407208    4292 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.crt.f0e78aee with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.126.63]
	I0903 23:59:48.759853    4292 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.crt.f0e78aee ...
	I0903 23:59:48.759853    4292 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.crt.f0e78aee: {Name:mk5c856da86e3d9764553e4bc6646f2c970e280a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:59:48.761859    4292 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.key.f0e78aee ...
	I0903 23:59:48.761859    4292 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.key.f0e78aee: {Name:mk09b53a27cd82d61dadad86e80dfcaad9f0950f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:59:48.762805    4292 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.crt.f0e78aee -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.crt
	I0903 23:59:48.781917    4292 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.key.f0e78aee -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.key
	I0903 23:59:48.784185    4292 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\proxy-client.key
	I0903 23:59:48.784340    4292 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\proxy-client.crt with IP's: []
	I0903 23:59:48.940424    4292 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\proxy-client.crt ...
	I0903 23:59:48.940424    4292 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\proxy-client.crt: {Name:mkf51ab8d70d3861327552e4563d3d6cf9ebcdd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:59:48.942433    4292 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\proxy-client.key ...
	I0903 23:59:48.942433    4292 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\proxy-client.key: {Name:mke832ae41a41213eb56a5bfcc3a75113d0d95a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:59:48.943636    4292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0903 23:59:48.943795    4292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0903 23:59:48.943795    4292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0903 23:59:48.943795    4292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0903 23:59:48.943795    4292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0903 23:59:48.944487    4292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0903 23:59:48.944605    4292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0903 23:59:48.956187    4292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0903 23:59:48.956408    4292 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220.pem (1338 bytes)
	W0903 23:59:48.957203    4292 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220_empty.pem, impossibly tiny 0 bytes
	I0903 23:59:48.957203    4292 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0903 23:59:48.957781    4292 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0903 23:59:48.958026    4292 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0903 23:59:48.958343    4292 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0903 23:59:48.958978    4292 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem (1708 bytes)
	I0903 23:59:48.958978    4292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:59:48.959575    4292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220.pem -> /usr/share/ca-certificates/2220.pem
	I0903 23:59:48.959757    4292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem -> /usr/share/ca-certificates/22202.pem
	I0903 23:59:48.961228    4292 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0903 23:59:49.012033    4292 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0903 23:59:49.071774    4292 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0903 23:59:49.128026    4292 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0903 23:59:49.178308    4292 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0903 23:59:49.233376    4292 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0903 23:59:49.290481    4292 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0903 23:59:49.342724    4292 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0903 23:59:49.398689    4292 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0903 23:59:49.452653    4292 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220.pem --> /usr/share/ca-certificates/2220.pem (1338 bytes)
	I0903 23:59:49.507294    4292 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem --> /usr/share/ca-certificates/22202.pem (1708 bytes)
	I0903 23:59:49.562305    4292 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0903 23:59:49.612168    4292 ssh_runner.go:195] Run: openssl version
	I0903 23:59:49.635572    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0903 23:59:49.670993    4292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:59:49.677515    4292 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  3 22:20 /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:59:49.691598    4292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:59:49.713891    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0903 23:59:49.745083    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2220.pem && ln -fs /usr/share/ca-certificates/2220.pem /etc/ssl/certs/2220.pem"
	I0903 23:59:49.781926    4292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2220.pem
	I0903 23:59:49.789643    4292 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  3 22:37 /usr/share/ca-certificates/2220.pem
	I0903 23:59:49.802310    4292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2220.pem
	I0903 23:59:49.828538    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2220.pem /etc/ssl/certs/51391683.0"
	I0903 23:59:49.867437    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22202.pem && ln -fs /usr/share/ca-certificates/22202.pem /etc/ssl/certs/22202.pem"
	I0903 23:59:49.899809    4292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22202.pem
	I0903 23:59:49.907803    4292 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  3 22:37 /usr/share/ca-certificates/22202.pem
	I0903 23:59:49.922862    4292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22202.pem
	I0903 23:59:49.946009    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22202.pem /etc/ssl/certs/3ec20f2e.0"
	I0903 23:59:49.978899    4292 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0903 23:59:49.985886    4292 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0903 23:59:49.985886    4292 kubeadm.go:392] StartCluster: {Name:multinode-477700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
4.0 ClusterName:multinode-477700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.126.63 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirr
or: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:59:49.996790    4292 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0903 23:59:50.033887    4292 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0903 23:59:50.073410    4292 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0903 23:59:50.106014    4292 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0903 23:59:50.134680    4292 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0903 23:59:50.134680    4292 kubeadm.go:157] found existing configuration files:
	
	I0903 23:59:50.147750    4292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0903 23:59:50.167364    4292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0903 23:59:50.178675    4292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0903 23:59:50.213305    4292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0903 23:59:50.231770    4292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0903 23:59:50.245479    4292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0903 23:59:50.281283    4292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0903 23:59:50.303195    4292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0903 23:59:50.319319    4292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0903 23:59:50.359225    4292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0903 23:59:50.382893    4292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0903 23:59:50.399430    4292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0903 23:59:50.423806    4292 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0903 23:59:50.630412    4292 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0904 00:00:06.365046    4292 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0904 00:00:06.365159    4292 kubeadm.go:310] [preflight] Running pre-flight checks
	I0904 00:00:06.365413    4292 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0904 00:00:06.365601    4292 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0904 00:00:06.365826    4292 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0904 00:00:06.366064    4292 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0904 00:00:06.368880    4292 out.go:252]   - Generating certificates and keys ...
	I0904 00:00:06.369238    4292 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0904 00:00:06.369392    4292 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0904 00:00:06.369825    4292 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0904 00:00:06.370010    4292 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0904 00:00:06.370040    4292 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0904 00:00:06.370040    4292 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0904 00:00:06.370040    4292 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0904 00:00:06.370720    4292 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-477700] and IPs [172.25.126.63 127.0.0.1 ::1]
	I0904 00:00:06.370834    4292 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0904 00:00:06.371299    4292 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-477700] and IPs [172.25.126.63 127.0.0.1 ::1]
	I0904 00:00:06.371535    4292 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0904 00:00:06.371570    4292 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0904 00:00:06.371817    4292 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0904 00:00:06.371872    4292 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0904 00:00:06.372128    4292 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0904 00:00:06.372318    4292 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0904 00:00:06.372352    4292 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0904 00:00:06.372352    4292 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0904 00:00:06.372352    4292 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0904 00:00:06.373005    4292 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0904 00:00:06.373105    4292 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0904 00:00:06.378281    4292 out.go:252]   - Booting up control plane ...
	I0904 00:00:06.378281    4292 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0904 00:00:06.378281    4292 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0904 00:00:06.379039    4292 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0904 00:00:06.379483    4292 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0904 00:00:06.379780    4292 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0904 00:00:06.380046    4292 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0904 00:00:06.380261    4292 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0904 00:00:06.380371    4292 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0904 00:00:06.380371    4292 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0904 00:00:06.381052    4292 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0904 00:00:06.381052    4292 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001780475s
	I0904 00:00:06.381052    4292 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0904 00:00:06.381704    4292 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://172.25.126.63:8443/livez
	I0904 00:00:06.381740    4292 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0904 00:00:06.381740    4292 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0904 00:00:06.382335    4292 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 4.201924252s
	I0904 00:00:06.382531    4292 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 5.462061452s
	I0904 00:00:06.382531    4292 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 8.00207395s
	I0904 00:00:06.383059    4292 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0904 00:00:06.383092    4292 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0904 00:00:06.383092    4292 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0904 00:00:06.383092    4292 kubeadm.go:310] [mark-control-plane] Marking the node multinode-477700 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0904 00:00:06.383092    4292 kubeadm.go:310] [bootstrap-token] Using token: gm1tgd.7cc8f6rrk3fx97xt
	I0904 00:00:06.389672    4292 out.go:252]   - Configuring RBAC rules ...
	I0904 00:00:06.389672    4292 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0904 00:00:06.389672    4292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0904 00:00:06.390813    4292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0904 00:00:06.390929    4292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0904 00:00:06.390929    4292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0904 00:00:06.390929    4292 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0904 00:00:06.391857    4292 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0904 00:00:06.391857    4292 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0904 00:00:06.391857    4292 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0904 00:00:06.391857    4292 kubeadm.go:310] 
	I0904 00:00:06.391857    4292 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0904 00:00:06.391857    4292 kubeadm.go:310] 
	I0904 00:00:06.391857    4292 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0904 00:00:06.391857    4292 kubeadm.go:310] 
	I0904 00:00:06.391857    4292 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0904 00:00:06.391857    4292 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0904 00:00:06.391857    4292 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0904 00:00:06.391857    4292 kubeadm.go:310] 
	I0904 00:00:06.391857    4292 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0904 00:00:06.392916    4292 kubeadm.go:310] 
	I0904 00:00:06.392916    4292 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0904 00:00:06.392916    4292 kubeadm.go:310] 
	I0904 00:00:06.392916    4292 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0904 00:00:06.392916    4292 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0904 00:00:06.392916    4292 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0904 00:00:06.392916    4292 kubeadm.go:310] 
	I0904 00:00:06.393874    4292 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0904 00:00:06.393874    4292 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0904 00:00:06.393874    4292 kubeadm.go:310] 
	I0904 00:00:06.393874    4292 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token gm1tgd.7cc8f6rrk3fx97xt \
	I0904 00:00:06.393874    4292 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:461028e7d31446a9db54ef88db35928fa51812dbcfd2f42c8a70c32665923137 \
	I0904 00:00:06.393874    4292 kubeadm.go:310] 	--control-plane 
	I0904 00:00:06.393874    4292 kubeadm.go:310] 
	I0904 00:00:06.393874    4292 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0904 00:00:06.393874    4292 kubeadm.go:310] 
	I0904 00:00:06.394909    4292 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token gm1tgd.7cc8f6rrk3fx97xt \
	I0904 00:00:06.394909    4292 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:461028e7d31446a9db54ef88db35928fa51812dbcfd2f42c8a70c32665923137 
	I0904 00:00:06.394909    4292 cni.go:84] Creating CNI manager for ""
	I0904 00:00:06.394909    4292 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0904 00:00:06.398251    4292 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0904 00:00:06.413382    4292 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0904 00:00:06.421920    4292 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0904 00:00:06.422009    4292 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0904 00:00:06.477616    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0904 00:00:06.928415    4292 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0904 00:00:06.943056    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-477700 minikube.k8s.io/updated_at=2025_09_04T00_00_06_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=b3583632deefb20d71cab8d8ac0a8c3504aed1fb minikube.k8s.io/name=multinode-477700 minikube.k8s.io/primary=true
	I0904 00:00:06.944063    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 00:00:06.970288    4292 ops.go:34] apiserver oom_adj: -16
	I0904 00:00:07.120107    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 00:00:07.618609    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 00:00:08.120863    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 00:00:08.618573    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 00:00:09.117195    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 00:00:09.618474    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 00:00:10.120242    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 00:00:10.618434    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 00:00:10.744967    4292 kubeadm.go:1105] duration metric: took 3.8164648s to wait for elevateKubeSystemPrivileges
	I0904 00:00:10.744967    4292 kubeadm.go:394] duration metric: took 20.7587925s to StartCluster
	I0904 00:00:10.744967    4292 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 00:00:10.744967    4292 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0904 00:00:10.747542    4292 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 00:00:10.748558    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0904 00:00:10.748558    4292 start.go:235] Will wait 6m0s for node &{Name: IP:172.25.126.63 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 00:00:10.748558    4292 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0904 00:00:10.748558    4292 addons.go:69] Setting storage-provisioner=true in profile "multinode-477700"
	I0904 00:00:10.749535    4292 addons.go:238] Setting addon storage-provisioner=true in "multinode-477700"
	I0904 00:00:10.749535    4292 addons.go:69] Setting default-storageclass=true in profile "multinode-477700"
	I0904 00:00:10.749535    4292 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-477700"
	I0904 00:00:10.749535    4292 config.go:182] Loaded profile config "multinode-477700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0904 00:00:10.749535    4292 host.go:66] Checking if "multinode-477700" exists ...
	I0904 00:00:10.750558    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:00:10.750558    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:00:10.757544    4292 out.go:179] * Verifying Kubernetes components...
	I0904 00:00:10.774526    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 00:00:10.967895    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.25.112.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0904 00:00:11.220517    4292 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 00:00:11.867208    4292 start.go:976] {"host.minikube.internal": 172.25.112.1} host record injected into CoreDNS's ConfigMap
	I0904 00:00:11.870138    4292 kapi.go:59] client config for multinode-477700: &rest.Config{Host:"https://172.25.126.63:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-477700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-477700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24e0580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0904 00:00:11.870239    4292 kapi.go:59] client config for multinode-477700: &rest.Config{Host:"https://172.25.126.63:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-477700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-477700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24e0580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0904 00:00:11.873281    4292 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0904 00:00:11.873281    4292 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0904 00:00:11.873281    4292 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0904 00:00:11.873281    4292 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0904 00:00:11.873830    4292 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0904 00:00:11.873830    4292 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0904 00:00:11.874066    4292 node_ready.go:35] waiting up to 6m0s for node "multinode-477700" to be "Ready" ...
	I0904 00:00:12.379809    4292 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-477700" context rescaled to 1 replicas
	I0904 00:00:13.220914    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:00:13.221116    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:00:13.226885    4292 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 00:00:13.238827    4292 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 00:00:13.238891    4292 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0904 00:00:13.239009    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:00:13.248858    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:00:13.248858    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:00:13.250678    4292 kapi.go:59] client config for multinode-477700: &rest.Config{Host:"https://172.25.126.63:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-477700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-477700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24e0580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0904 00:00:13.251448    4292 addons.go:238] Setting addon default-storageclass=true in "multinode-477700"
	I0904 00:00:13.251448    4292 host.go:66] Checking if "multinode-477700" exists ...
	I0904 00:00:13.253046    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	W0904 00:00:13.880702    4292 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	I0904 00:00:15.616351    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:00:15.616351    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:00:15.616351    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:00:15.616596    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:00:15.616555    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:00:15.616596    4292 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0904 00:00:15.616685    4292 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0904 00:00:15.616718    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	W0904 00:00:16.380879    4292 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	I0904 00:00:17.844394    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:00:17.845556    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:00:17.845892    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:00:18.264822    4292 main.go:141] libmachine: [stdout =====>] : 172.25.126.63
	
	I0904 00:00:18.264822    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:00:18.265660    4292 sshutil.go:53] new ssh client: &{IP:172.25.126.63 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700\id_rsa Username:docker}
	W0904 00:00:18.381465    4292 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	I0904 00:00:18.431548    4292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0904 00:00:20.394914    4292 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	I0904 00:00:20.450317    4292 main.go:141] libmachine: [stdout =====>] : 172.25.126.63
	
	I0904 00:00:20.450317    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:00:20.450875    4292 sshutil.go:53] new ssh client: &{IP:172.25.126.63 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700\id_rsa Username:docker}
	I0904 00:00:20.583177    4292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0904 00:00:20.794603    4292 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0904 00:00:20.799934    4292 addons.go:514] duration metric: took 10.0512362s for enable addons: enabled=[storage-provisioner default-storageclass]
	W0904 00:00:22.880189    4292 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	W0904 00:00:24.881597    4292 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	W0904 00:00:27.379234    4292 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	W0904 00:00:29.379413    4292 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	W0904 00:00:31.380784    4292 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	W0904 00:00:33.880640    4292 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	I0904 00:00:34.382018    4292 node_ready.go:49] node "multinode-477700" is "Ready"
	I0904 00:00:34.382241    4292 node_ready.go:38] duration metric: took 22.5078621s for node "multinode-477700" to be "Ready" ...
	I0904 00:00:34.382241    4292 api_server.go:52] waiting for apiserver process to appear ...
	I0904 00:00:34.404552    4292 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 00:00:34.476352    4292 api_server.go:72] duration metric: took 23.727465s to wait for apiserver process to appear ...
	I0904 00:00:34.476352    4292 api_server.go:88] waiting for apiserver healthz status ...
	I0904 00:00:34.476352    4292 api_server.go:253] Checking apiserver healthz at https://172.25.126.63:8443/healthz ...
	I0904 00:00:34.493739    4292 api_server.go:279] https://172.25.126.63:8443/healthz returned 200:
	ok
	I0904 00:00:34.504352    4292 api_server.go:141] control plane version: v1.34.0
	I0904 00:00:34.504352    4292 api_server.go:131] duration metric: took 27.9987ms to wait for apiserver health ...
	I0904 00:00:34.504352    4292 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 00:00:34.519795    4292 system_pods.go:59] 8 kube-system pods found
	I0904 00:00:34.519795    4292 system_pods.go:61] "coredns-66bc5c9577-mg9nc" [39d4fb7b-1473-4a4e-9fb1-ce058a1c4904] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 00:00:34.519795    4292 system_pods.go:61] "etcd-multinode-477700" [ba2bde18-3f02-42fc-adf4-c1d258733097] Running
	I0904 00:00:34.519795    4292 system_pods.go:61] "kindnet-gdpss" [2af7872d-5ba2-4df0-89ef-eb2c46ddd319] Running
	I0904 00:00:34.519795    4292 system_pods.go:61] "kube-apiserver-multinode-477700" [896a4691-b5a5-4241-9094-1e33ac8eb7c6] Running
	I0904 00:00:34.519795    4292 system_pods.go:61] "kube-controller-manager-multinode-477700" [4171909c-4c75-4c40-9e8f-89b31bfd0f3a] Running
	I0904 00:00:34.519795    4292 system_pods.go:61] "kube-proxy-v9bfx" [2e72957a-51b3-4f18-876a-32d17f1fcb01] Running
	I0904 00:00:34.519795    4292 system_pods.go:61] "kube-scheduler-multinode-477700" [9600bbee-3d89-49b5-9e4a-2b6eb499de52] Running
	I0904 00:00:34.519795    4292 system_pods.go:61] "storage-provisioner" [6ff776d2-685f-4111-bbe0-2d7f616fed2a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 00:00:34.519795    4292 system_pods.go:74] duration metric: took 15.4437ms to wait for pod list to return data ...
	I0904 00:00:34.519795    4292 default_sa.go:34] waiting for default service account to be created ...
	I0904 00:00:34.525485    4292 default_sa.go:45] found service account: "default"
	I0904 00:00:34.525580    4292 default_sa.go:55] duration metric: took 5.7848ms for default service account to be created ...
	I0904 00:00:34.525580    4292 system_pods.go:116] waiting for k8s-apps to be running ...
	I0904 00:00:34.534929    4292 system_pods.go:86] 8 kube-system pods found
	I0904 00:00:34.534929    4292 system_pods.go:89] "coredns-66bc5c9577-mg9nc" [39d4fb7b-1473-4a4e-9fb1-ce058a1c4904] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 00:00:34.534929    4292 system_pods.go:89] "etcd-multinode-477700" [ba2bde18-3f02-42fc-adf4-c1d258733097] Running
	I0904 00:00:34.534929    4292 system_pods.go:89] "kindnet-gdpss" [2af7872d-5ba2-4df0-89ef-eb2c46ddd319] Running
	I0904 00:00:34.534929    4292 system_pods.go:89] "kube-apiserver-multinode-477700" [896a4691-b5a5-4241-9094-1e33ac8eb7c6] Running
	I0904 00:00:34.534929    4292 system_pods.go:89] "kube-controller-manager-multinode-477700" [4171909c-4c75-4c40-9e8f-89b31bfd0f3a] Running
	I0904 00:00:34.534929    4292 system_pods.go:89] "kube-proxy-v9bfx" [2e72957a-51b3-4f18-876a-32d17f1fcb01] Running
	I0904 00:00:34.534929    4292 system_pods.go:89] "kube-scheduler-multinode-477700" [9600bbee-3d89-49b5-9e4a-2b6eb499de52] Running
	I0904 00:00:34.534929    4292 system_pods.go:89] "storage-provisioner" [6ff776d2-685f-4111-bbe0-2d7f616fed2a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 00:00:34.534929    4292 retry.go:31] will retry after 312.431546ms: missing components: kube-dns
	I0904 00:00:34.857250    4292 system_pods.go:86] 8 kube-system pods found
	I0904 00:00:34.857423    4292 system_pods.go:89] "coredns-66bc5c9577-mg9nc" [39d4fb7b-1473-4a4e-9fb1-ce058a1c4904] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 00:00:34.857423    4292 system_pods.go:89] "etcd-multinode-477700" [ba2bde18-3f02-42fc-adf4-c1d258733097] Running
	I0904 00:00:34.857423    4292 system_pods.go:89] "kindnet-gdpss" [2af7872d-5ba2-4df0-89ef-eb2c46ddd319] Running
	I0904 00:00:34.857458    4292 system_pods.go:89] "kube-apiserver-multinode-477700" [896a4691-b5a5-4241-9094-1e33ac8eb7c6] Running
	I0904 00:00:34.857458    4292 system_pods.go:89] "kube-controller-manager-multinode-477700" [4171909c-4c75-4c40-9e8f-89b31bfd0f3a] Running
	I0904 00:00:34.857458    4292 system_pods.go:89] "kube-proxy-v9bfx" [2e72957a-51b3-4f18-876a-32d17f1fcb01] Running
	I0904 00:00:34.857458    4292 system_pods.go:89] "kube-scheduler-multinode-477700" [9600bbee-3d89-49b5-9e4a-2b6eb499de52] Running
	I0904 00:00:34.857458    4292 system_pods.go:89] "storage-provisioner" [6ff776d2-685f-4111-bbe0-2d7f616fed2a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 00:00:34.857545    4292 retry.go:31] will retry after 257.314833ms: missing components: kube-dns
	I0904 00:00:35.133829    4292 system_pods.go:86] 8 kube-system pods found
	I0904 00:00:35.133911    4292 system_pods.go:89] "coredns-66bc5c9577-mg9nc" [39d4fb7b-1473-4a4e-9fb1-ce058a1c4904] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 00:00:35.133911    4292 system_pods.go:89] "etcd-multinode-477700" [ba2bde18-3f02-42fc-adf4-c1d258733097] Running
	I0904 00:00:35.133911    4292 system_pods.go:89] "kindnet-gdpss" [2af7872d-5ba2-4df0-89ef-eb2c46ddd319] Running
	I0904 00:00:35.133990    4292 system_pods.go:89] "kube-apiserver-multinode-477700" [896a4691-b5a5-4241-9094-1e33ac8eb7c6] Running
	I0904 00:00:35.133990    4292 system_pods.go:89] "kube-controller-manager-multinode-477700" [4171909c-4c75-4c40-9e8f-89b31bfd0f3a] Running
	I0904 00:00:35.133990    4292 system_pods.go:89] "kube-proxy-v9bfx" [2e72957a-51b3-4f18-876a-32d17f1fcb01] Running
	I0904 00:00:35.133990    4292 system_pods.go:89] "kube-scheduler-multinode-477700" [9600bbee-3d89-49b5-9e4a-2b6eb499de52] Running
	I0904 00:00:35.133990    4292 system_pods.go:89] "storage-provisioner" [6ff776d2-685f-4111-bbe0-2d7f616fed2a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 00:00:35.133990    4292 retry.go:31] will retry after 421.435579ms: missing components: kube-dns
	I0904 00:00:35.573319    4292 system_pods.go:86] 8 kube-system pods found
	I0904 00:00:35.573382    4292 system_pods.go:89] "coredns-66bc5c9577-mg9nc" [39d4fb7b-1473-4a4e-9fb1-ce058a1c4904] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 00:00:35.573382    4292 system_pods.go:89] "etcd-multinode-477700" [ba2bde18-3f02-42fc-adf4-c1d258733097] Running
	I0904 00:00:35.573382    4292 system_pods.go:89] "kindnet-gdpss" [2af7872d-5ba2-4df0-89ef-eb2c46ddd319] Running
	I0904 00:00:35.573382    4292 system_pods.go:89] "kube-apiserver-multinode-477700" [896a4691-b5a5-4241-9094-1e33ac8eb7c6] Running
	I0904 00:00:35.573382    4292 system_pods.go:89] "kube-controller-manager-multinode-477700" [4171909c-4c75-4c40-9e8f-89b31bfd0f3a] Running
	I0904 00:00:35.573382    4292 system_pods.go:89] "kube-proxy-v9bfx" [2e72957a-51b3-4f18-876a-32d17f1fcb01] Running
	I0904 00:00:35.573382    4292 system_pods.go:89] "kube-scheduler-multinode-477700" [9600bbee-3d89-49b5-9e4a-2b6eb499de52] Running
	I0904 00:00:35.573382    4292 system_pods.go:89] "storage-provisioner" [6ff776d2-685f-4111-bbe0-2d7f616fed2a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 00:00:35.573382    4292 retry.go:31] will retry after 544.171379ms: missing components: kube-dns
	I0904 00:00:36.128509    4292 system_pods.go:86] 8 kube-system pods found
	I0904 00:00:36.128587    4292 system_pods.go:89] "coredns-66bc5c9577-mg9nc" [39d4fb7b-1473-4a4e-9fb1-ce058a1c4904] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 00:00:36.128587    4292 system_pods.go:89] "etcd-multinode-477700" [ba2bde18-3f02-42fc-adf4-c1d258733097] Running
	I0904 00:00:36.128587    4292 system_pods.go:89] "kindnet-gdpss" [2af7872d-5ba2-4df0-89ef-eb2c46ddd319] Running
	I0904 00:00:36.128587    4292 system_pods.go:89] "kube-apiserver-multinode-477700" [896a4691-b5a5-4241-9094-1e33ac8eb7c6] Running
	I0904 00:00:36.128587    4292 system_pods.go:89] "kube-controller-manager-multinode-477700" [4171909c-4c75-4c40-9e8f-89b31bfd0f3a] Running
	I0904 00:00:36.128587    4292 system_pods.go:89] "kube-proxy-v9bfx" [2e72957a-51b3-4f18-876a-32d17f1fcb01] Running
	I0904 00:00:36.128587    4292 system_pods.go:89] "kube-scheduler-multinode-477700" [9600bbee-3d89-49b5-9e4a-2b6eb499de52] Running
	I0904 00:00:36.128587    4292 system_pods.go:89] "storage-provisioner" [6ff776d2-685f-4111-bbe0-2d7f616fed2a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0904 00:00:36.128587    4292 retry.go:31] will retry after 734.503369ms: missing components: kube-dns
	I0904 00:00:36.871055    4292 system_pods.go:86] 8 kube-system pods found
	I0904 00:00:36.871123    4292 system_pods.go:89] "coredns-66bc5c9577-mg9nc" [39d4fb7b-1473-4a4e-9fb1-ce058a1c4904] Running
	I0904 00:00:36.871123    4292 system_pods.go:89] "etcd-multinode-477700" [ba2bde18-3f02-42fc-adf4-c1d258733097] Running
	I0904 00:00:36.871123    4292 system_pods.go:89] "kindnet-gdpss" [2af7872d-5ba2-4df0-89ef-eb2c46ddd319] Running
	I0904 00:00:36.871189    4292 system_pods.go:89] "kube-apiserver-multinode-477700" [896a4691-b5a5-4241-9094-1e33ac8eb7c6] Running
	I0904 00:00:36.871189    4292 system_pods.go:89] "kube-controller-manager-multinode-477700" [4171909c-4c75-4c40-9e8f-89b31bfd0f3a] Running
	I0904 00:00:36.871189    4292 system_pods.go:89] "kube-proxy-v9bfx" [2e72957a-51b3-4f18-876a-32d17f1fcb01] Running
	I0904 00:00:36.871189    4292 system_pods.go:89] "kube-scheduler-multinode-477700" [9600bbee-3d89-49b5-9e4a-2b6eb499de52] Running
	I0904 00:00:36.871189    4292 system_pods.go:89] "storage-provisioner" [6ff776d2-685f-4111-bbe0-2d7f616fed2a] Running
	I0904 00:00:36.871189    4292 system_pods.go:126] duration metric: took 2.345576s to wait for k8s-apps to be running ...
	I0904 00:00:36.871189    4292 system_svc.go:44] waiting for kubelet service to be running ....
	I0904 00:00:36.886468    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 00:00:36.916860    4292 system_svc.go:56] duration metric: took 45.5089ms WaitForService to wait for kubelet
	I0904 00:00:36.916860    4292 kubeadm.go:578] duration metric: took 26.1679382s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 00:00:36.916860    4292 node_conditions.go:102] verifying NodePressure condition ...
	I0904 00:00:36.920925    4292 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0904 00:00:36.920925    4292 node_conditions.go:123] node cpu capacity is 2
	I0904 00:00:36.920925    4292 node_conditions.go:105] duration metric: took 4.0651ms to run NodePressure ...
	I0904 00:00:36.920925    4292 start.go:241] waiting for startup goroutines ...
	I0904 00:00:36.920925    4292 start.go:246] waiting for cluster config update ...
	I0904 00:00:36.920925    4292 start.go:255] writing updated cluster config ...
	I0904 00:00:36.925751    4292 out.go:203] 
	I0904 00:00:36.930598    4292 config.go:182] Loaded profile config "ha-270000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0904 00:00:36.938379    4292 config.go:182] Loaded profile config "multinode-477700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0904 00:00:36.939388    4292 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\config.json ...
	I0904 00:00:36.949045    4292 out.go:179] * Starting "multinode-477700-m02" worker node in "multinode-477700" cluster
	I0904 00:00:36.951780    4292 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0904 00:00:36.951780    4292 cache.go:58] Caching tarball of preloaded images
	I0904 00:00:36.952337    4292 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0904 00:00:36.952449    4292 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0904 00:00:36.952449    4292 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\config.json ...
	I0904 00:00:36.955461    4292 start.go:360] acquireMachinesLock for multinode-477700-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 00:00:36.955461    4292 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-477700-m02"
	I0904 00:00:36.955461    4292 start.go:93] Provisioning new machine with config: &{Name:multinode-477700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:multinode-477700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.126.63 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0904 00:00:36.956489    4292 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0904 00:00:36.959716    4292 out.go:252] * Creating hyperv VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0904 00:00:36.960102    4292 start.go:159] libmachine.API.Create for "multinode-477700" (driver="hyperv")
	I0904 00:00:36.960102    4292 client.go:168] LocalClient.Create starting
	I0904 00:00:36.960458    4292 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0904 00:00:36.961057    4292 main.go:141] libmachine: Decoding PEM data...
	I0904 00:00:36.961057    4292 main.go:141] libmachine: Parsing certificate...
	I0904 00:00:36.961230    4292 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0904 00:00:36.961562    4292 main.go:141] libmachine: Decoding PEM data...
	I0904 00:00:36.961562    4292 main.go:141] libmachine: Parsing certificate...
	I0904 00:00:36.961783    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0904 00:00:38.926343    4292 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0904 00:00:38.926343    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:00:38.926756    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0904 00:00:40.704208    4292 main.go:141] libmachine: [stdout =====>] : False
	
	I0904 00:00:40.704208    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:00:40.704913    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0904 00:00:42.268535    4292 main.go:141] libmachine: [stdout =====>] : True
	
	I0904 00:00:42.268535    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:00:42.268535    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0904 00:00:46.184810    4292 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0904 00:00:46.184957    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:00:46.187124    4292 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.36.0-1753487480-21147-amd64.iso...
	I0904 00:00:46.921471    4292 main.go:141] libmachine: Creating SSH key...
	I0904 00:00:47.700069    4292 main.go:141] libmachine: Creating VM...
	I0904 00:00:47.700069    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0904 00:00:50.698159    4292 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0904 00:00:50.698159    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:00:50.698159    4292 main.go:141] libmachine: Using switch "Default Switch"
	I0904 00:00:50.698159    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0904 00:00:52.554168    4292 main.go:141] libmachine: [stdout =====>] : True
	
	I0904 00:00:52.554168    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:00:52.554168    4292 main.go:141] libmachine: Creating VHD
	I0904 00:00:52.554300    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0904 00:00:56.271569    4292 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : D2792E39-9AC3-4400-87E4-C385506ADD01
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0904 00:00:56.271569    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:00:56.271868    4292 main.go:141] libmachine: Writing magic tar header
	I0904 00:00:56.271868    4292 main.go:141] libmachine: Writing SSH key tar header
	I0904 00:00:56.286317    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0904 00:00:59.473986    4292 main.go:141] libmachine: [stdout =====>] : 
	I0904 00:00:59.473986    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:00:59.474115    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700-m02\disk.vhd' -SizeBytes 20000MB
	I0904 00:01:02.159750    4292 main.go:141] libmachine: [stdout =====>] : 
	I0904 00:01:02.160828    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:01:02.160906    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-477700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 3072MB
	I0904 00:01:05.919493    4292 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-477700-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0904 00:01:05.919493    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:01:05.919493    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-477700-m02 -DynamicMemoryEnabled $false
	I0904 00:01:08.155214    4292 main.go:141] libmachine: [stdout =====>] : 
	I0904 00:01:08.155934    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:01:08.156008    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-477700-m02 -Count 2
	I0904 00:01:10.323400    4292 main.go:141] libmachine: [stdout =====>] : 
	I0904 00:01:10.323400    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:01:10.324001    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-477700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700-m02\boot2docker.iso'
	I0904 00:01:12.914072    4292 main.go:141] libmachine: [stdout =====>] : 
	I0904 00:01:12.914072    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:01:12.914072    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-477700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700-m02\disk.vhd'
	I0904 00:01:15.613255    4292 main.go:141] libmachine: [stdout =====>] : 
	I0904 00:01:15.613480    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:01:15.613480    4292 main.go:141] libmachine: Starting VM...
	I0904 00:01:15.613553    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-477700-m02
	I0904 00:01:18.925585    4292 main.go:141] libmachine: [stdout =====>] : 
	I0904 00:01:18.925585    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:01:18.925585    4292 main.go:141] libmachine: Waiting for host to start...
	I0904 00:01:18.925585    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:01:21.230865    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:01:21.230865    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:01:21.230865    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:01:23.986024    4292 main.go:141] libmachine: [stdout =====>] : 
	I0904 00:01:23.986024    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:01:24.987592    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:01:27.200085    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:01:27.200085    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:01:27.200085    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:01:29.757170    4292 main.go:141] libmachine: [stdout =====>] : 
	I0904 00:01:29.758151    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:01:30.758669    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:01:33.038390    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:01:33.038390    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:01:33.038390    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:01:35.600504    4292 main.go:141] libmachine: [stdout =====>] : 
	I0904 00:01:35.600575    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:01:36.600997    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:01:38.868154    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:01:38.868239    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:01:38.868406    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:01:41.377650    4292 main.go:141] libmachine: [stdout =====>] : 
	I0904 00:01:41.377726    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:01:42.378290    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:01:44.572854    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:01:44.572854    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:01:44.572854    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:01:47.227049    4292 main.go:141] libmachine: [stdout =====>] : 172.25.125.181
	
	I0904 00:01:47.227699    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:01:47.227750    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:01:49.359609    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:01:49.359690    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:01:49.359724    4292 machine.go:93] provisionDockerMachine start ...
	I0904 00:01:49.359821    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:01:51.522872    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:01:51.522872    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:01:51.522872    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:01:54.173377    4292 main.go:141] libmachine: [stdout =====>] : 172.25.125.181
	
	I0904 00:01:54.173377    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:01:54.181826    4292 main.go:141] libmachine: Using SSH client type: native
	I0904 00:01:54.197876    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.125.181 22 <nil> <nil>}
	I0904 00:01:54.197876    4292 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 00:01:54.332956    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0904 00:01:54.332999    4292 buildroot.go:166] provisioning hostname "multinode-477700-m02"
	I0904 00:01:54.333070    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:01:56.419728    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:01:56.420751    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:01:56.420996    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:01:58.905003    4292 main.go:141] libmachine: [stdout =====>] : 172.25.125.181
	
	I0904 00:01:58.905003    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:01:58.909880    4292 main.go:141] libmachine: Using SSH client type: native
	I0904 00:01:58.910549    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.125.181 22 <nil> <nil>}
	I0904 00:01:58.910549    4292 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-477700-m02 && echo "multinode-477700-m02" | sudo tee /etc/hostname
	I0904 00:01:59.067618    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-477700-m02
	
	I0904 00:01:59.067618    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:02:01.202185    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:02:01.202185    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:02:01.203116    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:02:03.764565    4292 main.go:141] libmachine: [stdout =====>] : 172.25.125.181
	
	I0904 00:02:03.764920    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:02:03.771157    4292 main.go:141] libmachine: Using SSH client type: native
	I0904 00:02:03.771948    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.125.181 22 <nil> <nil>}
	I0904 00:02:03.772010    4292 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-477700-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-477700-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-477700-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 00:02:03.924846    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 00:02:03.924966    4292 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0904 00:02:03.924966    4292 buildroot.go:174] setting up certificates
	I0904 00:02:03.925066    4292 provision.go:84] configureAuth start
	I0904 00:02:03.925066    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:02:06.020825    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:02:06.020825    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:02:06.021594    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:02:08.595830    4292 main.go:141] libmachine: [stdout =====>] : 172.25.125.181
	
	I0904 00:02:08.596830    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:02:08.596830    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:02:10.715114    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:02:10.715114    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:02:10.715114    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:02:13.213441    4292 main.go:141] libmachine: [stdout =====>] : 172.25.125.181
	
	I0904 00:02:13.213535    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:02:13.213580    4292 provision.go:143] copyHostCerts
	I0904 00:02:13.213758    4292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0904 00:02:13.213947    4292 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0904 00:02:13.213947    4292 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0904 00:02:13.214676    4292 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0904 00:02:13.215837    4292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0904 00:02:13.216447    4292 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0904 00:02:13.216447    4292 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0904 00:02:13.217045    4292 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0904 00:02:13.217770    4292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0904 00:02:13.218467    4292 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0904 00:02:13.218510    4292 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0904 00:02:13.218571    4292 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0904 00:02:13.219966    4292 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-477700-m02 san=[127.0.0.1 172.25.125.181 localhost minikube multinode-477700-m02]
	I0904 00:02:13.749880    4292 provision.go:177] copyRemoteCerts
	I0904 00:02:13.762742    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 00:02:13.762864    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:02:15.901736    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:02:15.902289    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:02:15.902289    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:02:18.429166    4292 main.go:141] libmachine: [stdout =====>] : 172.25.125.181
	
	I0904 00:02:18.429166    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:02:18.429166    4292 sshutil.go:53] new ssh client: &{IP:172.25.125.181 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700-m02\id_rsa Username:docker}
	I0904 00:02:18.538707    4292 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7758992s)
	I0904 00:02:18.538707    4292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0904 00:02:18.539242    4292 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0904 00:02:18.600402    4292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0904 00:02:18.600894    4292 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0904 00:02:18.662157    4292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0904 00:02:18.662218    4292 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0904 00:02:18.722113    4292 provision.go:87] duration metric: took 14.7968427s to configureAuth
	I0904 00:02:18.722250    4292 buildroot.go:189] setting minikube options for container-runtime
	I0904 00:02:18.722869    4292 config.go:182] Loaded profile config "multinode-477700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0904 00:02:18.722995    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:02:20.839846    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:02:20.839935    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:02:20.839973    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:02:23.469207    4292 main.go:141] libmachine: [stdout =====>] : 172.25.125.181
	
	I0904 00:02:23.469207    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:02:23.476197    4292 main.go:141] libmachine: Using SSH client type: native
	I0904 00:02:23.476197    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.125.181 22 <nil> <nil>}
	I0904 00:02:23.476805    4292 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0904 00:02:23.618231    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0904 00:02:23.618231    4292 buildroot.go:70] root file system type: tmpfs
	I0904 00:02:23.618462    4292 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0904 00:02:23.618462    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:02:25.756467    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:02:25.756994    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:02:25.757141    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:02:28.277590    4292 main.go:141] libmachine: [stdout =====>] : 172.25.125.181
	
	I0904 00:02:28.277909    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:02:28.284951    4292 main.go:141] libmachine: Using SSH client type: native
	I0904 00:02:28.285670    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.125.181 22 <nil> <nil>}
	I0904 00:02:28.285670    4292 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=172.25.126.63"
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0904 00:02:28.450225    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=172.25.126.63
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0904 00:02:28.450414    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:02:30.543572    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:02:30.544071    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:02:30.544071    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:02:33.007270    4292 main.go:141] libmachine: [stdout =====>] : 172.25.125.181
	
	I0904 00:02:33.008314    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:02:33.014632    4292 main.go:141] libmachine: Using SSH client type: native
	I0904 00:02:33.015865    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.125.181 22 <nil> <nil>}
	I0904 00:02:33.015865    4292 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0904 00:02:34.442236    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
	
	I0904 00:02:34.442236    4292 machine.go:96] duration metric: took 45.08189s to provisionDockerMachine
	I0904 00:02:34.442236    4292 client.go:171] duration metric: took 1m57.4805089s to LocalClient.Create
	I0904 00:02:34.442236    4292 start.go:167] duration metric: took 1m57.4805089s to libmachine.API.Create "multinode-477700"
	I0904 00:02:34.442236    4292 start.go:293] postStartSetup for "multinode-477700-m02" (driver="hyperv")
	I0904 00:02:34.442795    4292 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 00:02:34.463544    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 00:02:34.463544    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:02:36.560776    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:02:36.560776    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:02:36.560899    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:02:39.085951    4292 main.go:141] libmachine: [stdout =====>] : 172.25.125.181
	
	I0904 00:02:39.086138    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:02:39.086469    4292 sshutil.go:53] new ssh client: &{IP:172.25.125.181 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700-m02\id_rsa Username:docker}
	I0904 00:02:39.194203    4292 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7305946s)
	I0904 00:02:39.210299    4292 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 00:02:39.218790    4292 info.go:137] Remote host: Buildroot 2025.02
	I0904 00:02:39.218790    4292 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0904 00:02:39.219794    4292 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0904 00:02:39.220764    4292 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem -> 22202.pem in /etc/ssl/certs
	I0904 00:02:39.220764    4292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem -> /etc/ssl/certs/22202.pem
	I0904 00:02:39.232759    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0904 00:02:39.253705    4292 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem --> /etc/ssl/certs/22202.pem (1708 bytes)
	I0904 00:02:39.310669    4292 start.go:296] duration metric: took 4.8683657s for postStartSetup
	I0904 00:02:39.313808    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:02:41.421411    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:02:41.421411    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:02:41.422019    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:02:43.908664    4292 main.go:141] libmachine: [stdout =====>] : 172.25.125.181
	
	I0904 00:02:43.909412    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:02:43.909491    4292 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\config.json ...
	I0904 00:02:43.912272    4292 start.go:128] duration metric: took 2m6.9539701s to createHost
	I0904 00:02:43.912272    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:02:46.018481    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:02:46.018481    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:02:46.018713    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:02:48.528021    4292 main.go:141] libmachine: [stdout =====>] : 172.25.125.181
	
	I0904 00:02:48.528235    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:02:48.535455    4292 main.go:141] libmachine: Using SSH client type: native
	I0904 00:02:48.536015    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.125.181 22 <nil> <nil>}
	I0904 00:02:48.536015    4292 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0904 00:02:48.667675    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 1756944168.679918283
	
	I0904 00:02:48.667782    4292 fix.go:216] guest clock: 1756944168.679918283
	I0904 00:02:48.667782    4292 fix.go:229] Guest: 2025-09-04 00:02:48.679918283 +0000 UTC Remote: 2025-09-04 00:02:43.9122722 +0000 UTC m=+341.255914501 (delta=4.767646083s)
	I0904 00:02:48.667863    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:02:50.773271    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:02:50.773271    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:02:50.773271    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:02:53.319448    4292 main.go:141] libmachine: [stdout =====>] : 172.25.125.181
	
	I0904 00:02:53.319448    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:02:53.326577    4292 main.go:141] libmachine: Using SSH client type: native
	I0904 00:02:53.327155    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.125.181 22 <nil> <nil>}
	I0904 00:02:53.327155    4292 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1756944168
	I0904 00:02:53.475884    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Sep  4 00:02:48 UTC 2025
	
	I0904 00:02:53.475884    4292 fix.go:236] clock set: Thu Sep  4 00:02:48 UTC 2025
	 (err=<nil>)
	I0904 00:02:53.475884    4292 start.go:83] releasing machines lock for "multinode-477700-m02", held for 2m16.5185348s
	I0904 00:02:53.476234    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:02:55.568107    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:02:55.568702    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:02:55.568853    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:02:58.068136    4292 main.go:141] libmachine: [stdout =====>] : 172.25.125.181
	
	I0904 00:02:58.068136    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:02:58.071912    4292 out.go:179] * Found network options:
	I0904 00:02:58.074838    4292 out.go:179]   - NO_PROXY=172.25.126.63
	W0904 00:02:58.077441    4292 proxy.go:120] fail to check proxy env: Error ip not in block
	I0904 00:02:58.080276    4292 out.go:179]   - NO_PROXY=172.25.126.63
	W0904 00:02:58.083442    4292 proxy.go:120] fail to check proxy env: Error ip not in block
	W0904 00:02:58.084347    4292 proxy.go:120] fail to check proxy env: Error ip not in block
	I0904 00:02:58.086925    4292 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0904 00:02:58.086925    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:02:58.096287    4292 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0904 00:02:58.096287    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:03:00.255612    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:03:00.256129    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:03:00.256129    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:03:00.257493    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:03:00.257493    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:03:00.257688    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:03:02.910055    4292 main.go:141] libmachine: [stdout =====>] : 172.25.125.181
	
	I0904 00:03:02.911135    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:03:02.911135    4292 sshutil.go:53] new ssh client: &{IP:172.25.125.181 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700-m02\id_rsa Username:docker}
	I0904 00:03:02.934200    4292 main.go:141] libmachine: [stdout =====>] : 172.25.125.181
	
	I0904 00:03:02.935251    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:03:02.935567    4292 sshutil.go:53] new ssh client: &{IP:172.25.125.181 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700-m02\id_rsa Username:docker}
	I0904 00:03:03.003154    4292 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9161609s)
	W0904 00:03:03.003154    4292 start.go:868] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0904 00:03:03.039209    4292 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.9428535s)
	W0904 00:03:03.039807    4292 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0904 00:03:03.055224    4292 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 00:03:03.089507    4292 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0904 00:03:03.089543    4292 start.go:495] detecting cgroup driver to use...
	I0904 00:03:03.089601    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 00:03:03.140094    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0904 00:03:03.176300    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0904 00:03:03.198504    4292 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0904 00:03:03.211394    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0904 00:03:03.245022    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0904 00:03:03.282015    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	W0904 00:03:03.298676    4292 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0904 00:03:03.298676    4292 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0904 00:03:03.320521    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0904 00:03:03.353419    4292 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 00:03:03.391658    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0904 00:03:03.426040    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0904 00:03:03.461007    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0904 00:03:03.495013    4292 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 00:03:03.515178    4292 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0904 00:03:03.528160    4292 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0904 00:03:03.566675    4292 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 00:03:03.594650    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 00:03:03.827613    4292 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0904 00:03:03.887913    4292 start.go:495] detecting cgroup driver to use...
	I0904 00:03:03.899240    4292 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0904 00:03:03.934495    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 00:03:03.971065    4292 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 00:03:04.015473    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 00:03:04.056801    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0904 00:03:04.093379    4292 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0904 00:03:04.150971    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0904 00:03:04.176199    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 00:03:04.228450    4292 ssh_runner.go:195] Run: which cri-dockerd
	I0904 00:03:04.246383    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0904 00:03:04.269664    4292 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0904 00:03:04.317232    4292 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0904 00:03:04.557554    4292 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0904 00:03:04.771293    4292 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0904 00:03:04.771293    4292 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0904 00:03:04.823313    4292 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0904 00:03:04.859107    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 00:03:05.083441    4292 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0904 00:03:05.261875    4292 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0904 00:03:05.295873    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 00:03:05.332681    4292 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0904 00:03:05.374745    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 00:03:05.621943    4292 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0904 00:03:06.717545    4292 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.0955875s)
	I0904 00:03:06.717990    4292 retry.go:31] will retry after 520.498524ms: docker not running
	I0904 00:03:07.251424    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 00:03:07.292326    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0904 00:03:07.329105    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0904 00:03:07.366771    4292 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0904 00:03:07.639636    4292 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0904 00:03:07.858664    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 00:03:08.084737    4292 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0904 00:03:08.155544    4292 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0904 00:03:08.193759    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 00:03:08.423310    4292 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0904 00:03:08.582354    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0904 00:03:08.607929    4292 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0904 00:03:08.619955    4292 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0904 00:03:08.630155    4292 start.go:563] Will wait 60s for crictl version
	I0904 00:03:08.642143    4292 ssh_runner.go:195] Run: which crictl
	I0904 00:03:08.659132    4292 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 00:03:08.716723    4292 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.3.2
	RuntimeApiVersion:  v1
	I0904 00:03:08.727571    4292 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0904 00:03:08.773435    4292 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0904 00:03:08.812444    4292 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.3.2 ...
	I0904 00:03:08.814410    4292 out.go:179]   - env NO_PROXY=172.25.126.63
	I0904 00:03:08.820951    4292 ip.go:180] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0904 00:03:08.824934    4292 ip.go:194] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0904 00:03:08.824934    4292 ip.go:194] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0904 00:03:08.824934    4292 ip.go:189] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0904 00:03:08.824934    4292 ip.go:215] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:71:2e:33 Flags:up|broadcast|multicast|running}
	I0904 00:03:08.827161    4292 ip.go:218] interface addr: fe80::b536:5e95:cebf:bd87/64
	I0904 00:03:08.828195    4292 ip.go:218] interface addr: 172.25.112.1/20
	I0904 00:03:08.840898    4292 ssh_runner.go:195] Run: grep 172.25.112.1	host.minikube.internal$ /etc/hosts
	I0904 00:03:08.848470    4292 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.112.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 00:03:08.872279    4292 mustload.go:65] Loading cluster: multinode-477700
	I0904 00:03:08.873062    4292 config.go:182] Loaded profile config "multinode-477700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0904 00:03:08.873846    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:03:10.932830    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:03:10.933487    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:03:10.933487    4292 host.go:66] Checking if "multinode-477700" exists ...
	I0904 00:03:10.934093    4292 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700 for IP: 172.25.125.181
	I0904 00:03:10.934255    4292 certs.go:194] generating shared ca certs ...
	I0904 00:03:10.934255    4292 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 00:03:10.934776    4292 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0904 00:03:10.935295    4292 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0904 00:03:10.935453    4292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0904 00:03:10.935698    4292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0904 00:03:10.935937    4292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0904 00:03:10.936088    4292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0904 00:03:10.936692    4292 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220.pem (1338 bytes)
	W0904 00:03:10.937129    4292 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220_empty.pem, impossibly tiny 0 bytes
	I0904 00:03:10.937386    4292 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0904 00:03:10.937674    4292 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0904 00:03:10.937985    4292 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0904 00:03:10.938336    4292 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0904 00:03:10.938364    4292 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem (1708 bytes)
	I0904 00:03:10.939307    4292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0904 00:03:10.939536    4292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220.pem -> /usr/share/ca-certificates/2220.pem
	I0904 00:03:10.939681    4292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem -> /usr/share/ca-certificates/22202.pem
	I0904 00:03:10.939894    4292 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 00:03:10.994540    4292 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0904 00:03:11.053446    4292 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 00:03:11.105214    4292 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0904 00:03:11.164377    4292 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 00:03:11.218671    4292 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220.pem --> /usr/share/ca-certificates/2220.pem (1338 bytes)
	I0904 00:03:11.273036    4292 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem --> /usr/share/ca-certificates/22202.pem (1708 bytes)
	I0904 00:03:11.336223    4292 ssh_runner.go:195] Run: openssl version
	I0904 00:03:11.358905    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22202.pem && ln -fs /usr/share/ca-certificates/22202.pem /etc/ssl/certs/22202.pem"
	I0904 00:03:11.392424    4292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22202.pem
	I0904 00:03:11.399929    4292 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  3 22:37 /usr/share/ca-certificates/22202.pem
	I0904 00:03:11.411257    4292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22202.pem
	I0904 00:03:11.432959    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22202.pem /etc/ssl/certs/3ec20f2e.0"
	I0904 00:03:11.466147    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 00:03:11.499544    4292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 00:03:11.509353    4292 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  3 22:20 /usr/share/ca-certificates/minikubeCA.pem
	I0904 00:03:11.520301    4292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 00:03:11.542263    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0904 00:03:11.574954    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2220.pem && ln -fs /usr/share/ca-certificates/2220.pem /etc/ssl/certs/2220.pem"
	I0904 00:03:11.606981    4292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2220.pem
	I0904 00:03:11.614232    4292 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  3 22:37 /usr/share/ca-certificates/2220.pem
	I0904 00:03:11.626253    4292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2220.pem
	I0904 00:03:11.649896    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2220.pem /etc/ssl/certs/51391683.0"
	I0904 00:03:11.684233    4292 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 00:03:11.691877    4292 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0904 00:03:11.691877    4292 kubeadm.go:926] updating node {m02 172.25.125.181 8443 v1.34.0 docker false true} ...
	I0904 00:03:11.691877    4292 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-477700-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.125.181
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:multinode-477700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0904 00:03:11.705113    4292 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0904 00:03:11.728207    4292 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.0': No such file or directory
	
	Initiating transfer...
	I0904 00:03:11.736521    4292 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.0
	I0904 00:03:11.762917    4292 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubelet.sha256
	I0904 00:03:11.762917    4292 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
	I0904 00:03:11.762917    4292 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubeadm.sha256
	I0904 00:03:11.762917    4292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubeadm -> /var/lib/minikube/binaries/v1.34.0/kubeadm
	I0904 00:03:11.762917    4292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubectl -> /var/lib/minikube/binaries/v1.34.0/kubectl
	I0904 00:03:11.780177    4292 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubectl
	I0904 00:03:11.780177    4292 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubeadm
	I0904 00:03:11.780177    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 00:03:11.787979    4292 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.0/kubeadm': No such file or directory
	I0904 00:03:11.788098    4292 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubeadm --> /var/lib/minikube/binaries/v1.34.0/kubeadm (74027192 bytes)
	I0904 00:03:11.793513    4292 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.0/kubectl': No such file or directory
	I0904 00:03:11.793513    4292 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubectl --> /var/lib/minikube/binaries/v1.34.0/kubectl (60559544 bytes)
	I0904 00:03:11.842734    4292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubelet -> /var/lib/minikube/binaries/v1.34.0/kubelet
	I0904 00:03:11.859383    4292 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubelet
	I0904 00:03:11.915750    4292 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.0/kubelet': No such file or directory
	I0904 00:03:11.915750    4292 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.34.0/kubelet --> /var/lib/minikube/binaries/v1.34.0/kubelet (59195684 bytes)
	I0904 00:03:13.194052    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0904 00:03:13.215008    4292 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0904 00:03:13.251429    4292 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 00:03:13.307114    4292 ssh_runner.go:195] Run: grep 172.25.126.63	control-plane.minikube.internal$ /etc/hosts
	I0904 00:03:13.315195    4292 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.126.63	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 00:03:13.354420    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 00:03:13.608696    4292 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 00:03:13.664943    4292 host.go:66] Checking if "multinode-477700" exists ...
	I0904 00:03:13.665865    4292 start.go:317] joinCluster: &{Name:multinode-477700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:multinode-477700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.126.63 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.125.181 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 00:03:13.665865    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0904 00:03:13.665865    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:03:15.798954    4292 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:03:15.798954    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:03:15.799471    4292 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:03:18.341291    4292 main.go:141] libmachine: [stdout =====>] : 172.25.126.63
	
	I0904 00:03:18.341291    4292 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:03:18.342413    4292 sshutil.go:53] new ssh client: &{IP:172.25.126.63 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700\id_rsa Username:docker}
	I0904 00:03:18.531578    4292 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.8656451s)
	I0904 00:03:18.531713    4292 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.25.125.181 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0904 00:03:18.531807    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1bhk3i.p13wzcvgl3d07j5j --discovery-token-ca-cert-hash sha256:461028e7d31446a9db54ef88db35928fa51812dbcfd2f42c8a70c32665923137 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-477700-m02"
	I0904 00:03:20.658261    4292 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1bhk3i.p13wzcvgl3d07j5j --discovery-token-ca-cert-hash sha256:461028e7d31446a9db54ef88db35928fa51812dbcfd2f42c8a70c32665923137 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-477700-m02": (2.1264253s)
	I0904 00:03:20.658261    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0904 00:03:21.138478    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-477700-m02 minikube.k8s.io/updated_at=2025_09_04T00_03_21_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=b3583632deefb20d71cab8d8ac0a8c3504aed1fb minikube.k8s.io/name=multinode-477700 minikube.k8s.io/primary=false
	I0904 00:03:21.270089    4292 start.go:319] duration metric: took 7.6040371s to joinCluster
	I0904 00:03:21.270232    4292 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.25.125.181 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0904 00:03:21.270860    4292 config.go:182] Loaded profile config "multinode-477700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0904 00:03:21.273321    4292 out.go:179] * Verifying Kubernetes components...
	I0904 00:03:21.289517    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 00:03:21.534507    4292 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 00:03:21.563364    4292 kapi.go:59] client config for multinode-477700: &rest.Config{Host:"https://172.25.126.63:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-477700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-477700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24e0580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0904 00:03:21.564671    4292 node_ready.go:35] waiting up to 6m0s for node "multinode-477700-m02" to be "Ready" ...
	W0904 00:03:23.570197    4292 node_ready.go:57] node "multinode-477700-m02" has "Ready":"False" status (will retry)
	W0904 00:03:26.070165    4292 node_ready.go:57] node "multinode-477700-m02" has "Ready":"False" status (will retry)
	W0904 00:03:28.070745    4292 node_ready.go:57] node "multinode-477700-m02" has "Ready":"False" status (will retry)
	W0904 00:03:30.570093    4292 node_ready.go:57] node "multinode-477700-m02" has "Ready":"False" status (will retry)
	W0904 00:03:33.070211    4292 node_ready.go:57] node "multinode-477700-m02" has "Ready":"False" status (will retry)
	W0904 00:03:35.070535    4292 node_ready.go:57] node "multinode-477700-m02" has "Ready":"False" status (will retry)
	W0904 00:03:37.072178    4292 node_ready.go:57] node "multinode-477700-m02" has "Ready":"False" status (will retry)
	W0904 00:03:39.570525    4292 node_ready.go:57] node "multinode-477700-m02" has "Ready":"False" status (will retry)
	W0904 00:03:42.069817    4292 node_ready.go:57] node "multinode-477700-m02" has "Ready":"False" status (will retry)
	W0904 00:03:44.070452    4292 node_ready.go:57] node "multinode-477700-m02" has "Ready":"False" status (will retry)
	W0904 00:03:46.570224    4292 node_ready.go:57] node "multinode-477700-m02" has "Ready":"False" status (will retry)
	W0904 00:03:48.570543    4292 node_ready.go:57] node "multinode-477700-m02" has "Ready":"False" status (will retry)
	W0904 00:03:50.570981    4292 node_ready.go:57] node "multinode-477700-m02" has "Ready":"False" status (will retry)
	W0904 00:03:53.073405    4292 node_ready.go:57] node "multinode-477700-m02" has "Ready":"False" status (will retry)
	I0904 00:03:54.070541    4292 node_ready.go:49] node "multinode-477700-m02" is "Ready"
	I0904 00:03:54.070634    4292 node_ready.go:38] duration metric: took 32.5055157s for node "multinode-477700-m02" to be "Ready" ...
	I0904 00:03:54.070724    4292 system_svc.go:44] waiting for kubelet service to be running ....
	I0904 00:03:54.082787    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 00:03:54.117200    4292 system_svc.go:56] duration metric: took 46.3878ms WaitForService to wait for kubelet
	I0904 00:03:54.117200    4292 kubeadm.go:578] duration metric: took 32.8463665s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 00:03:54.117266    4292 node_conditions.go:102] verifying NodePressure condition ...
	I0904 00:03:54.127993    4292 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0904 00:03:54.128085    4292 node_conditions.go:123] node cpu capacity is 2
	I0904 00:03:54.128198    4292 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0904 00:03:54.128198    4292 node_conditions.go:123] node cpu capacity is 2
	I0904 00:03:54.128198    4292 node_conditions.go:105] duration metric: took 10.9316ms to run NodePressure ...
	I0904 00:03:54.128198    4292 start.go:241] waiting for startup goroutines ...
	I0904 00:03:54.128272    4292 start.go:255] writing updated cluster config ...
	I0904 00:03:54.141859    4292 ssh_runner.go:195] Run: rm -f paused
	I0904 00:03:54.149380    4292 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 00:03:54.150986    4292 kapi.go:59] client config for multinode-477700: &rest.Config{Host:"https://172.25.126.63:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-477700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-477700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24e0580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0904 00:03:54.158754    4292 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mg9nc" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 00:03:54.167633    4292 pod_ready.go:94] pod "coredns-66bc5c9577-mg9nc" is "Ready"
	I0904 00:03:54.167633    4292 pod_ready.go:86] duration metric: took 8.8784ms for pod "coredns-66bc5c9577-mg9nc" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 00:03:54.173804    4292 pod_ready.go:83] waiting for pod "etcd-multinode-477700" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 00:03:54.181529    4292 pod_ready.go:94] pod "etcd-multinode-477700" is "Ready"
	I0904 00:03:54.181586    4292 pod_ready.go:86] duration metric: took 7.7823ms for pod "etcd-multinode-477700" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 00:03:54.185244    4292 pod_ready.go:83] waiting for pod "kube-apiserver-multinode-477700" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 00:03:54.191559    4292 pod_ready.go:94] pod "kube-apiserver-multinode-477700" is "Ready"
	I0904 00:03:54.191559    4292 pod_ready.go:86] duration metric: took 6.2557ms for pod "kube-apiserver-multinode-477700" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 00:03:54.195270    4292 pod_ready.go:83] waiting for pod "kube-controller-manager-multinode-477700" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 00:03:54.352424    4292 request.go:683] "Waited before sending request" delay="157.0442ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.126.63:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-477700"
	I0904 00:03:54.552120    4292 request.go:683] "Waited before sending request" delay="193.5321ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.126.63:8443/api/v1/nodes/multinode-477700"
	I0904 00:03:54.557223    4292 pod_ready.go:94] pod "kube-controller-manager-multinode-477700" is "Ready"
	I0904 00:03:54.557290    4292 pod_ready.go:86] duration metric: took 361.8939ms for pod "kube-controller-manager-multinode-477700" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 00:03:54.752459    4292 request.go:683] "Waited before sending request" delay="195.0861ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.126.63:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0904 00:03:54.757106    4292 pod_ready.go:83] waiting for pod "kube-proxy-lnh8p" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 00:03:54.952793    4292 request.go:683] "Waited before sending request" delay="195.6845ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.126.63:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lnh8p"
	I0904 00:03:55.152489    4292 request.go:683] "Waited before sending request" delay="195.1199ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.126.63:8443/api/v1/nodes/multinode-477700-m02"
	I0904 00:03:55.157519    4292 pod_ready.go:94] pod "kube-proxy-lnh8p" is "Ready"
	I0904 00:03:55.157519    4292 pod_ready.go:86] duration metric: took 400.4077ms for pod "kube-proxy-lnh8p" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 00:03:55.157519    4292 pod_ready.go:83] waiting for pod "kube-proxy-v9bfx" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 00:03:55.352723    4292 request.go:683] "Waited before sending request" delay="195.2017ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.126.63:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v9bfx"
	I0904 00:03:55.552194    4292 request.go:683] "Waited before sending request" delay="194.1429ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.126.63:8443/api/v1/nodes/multinode-477700"
	I0904 00:03:55.569079    4292 pod_ready.go:94] pod "kube-proxy-v9bfx" is "Ready"
	I0904 00:03:55.569079    4292 pod_ready.go:86] duration metric: took 411.554ms for pod "kube-proxy-v9bfx" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 00:03:55.752546    4292 request.go:683] "Waited before sending request" delay="183.4647ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.126.63:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I0904 00:03:55.758306    4292 pod_ready.go:83] waiting for pod "kube-scheduler-multinode-477700" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 00:03:55.952678    4292 request.go:683] "Waited before sending request" delay="194.1968ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.126.63:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-477700"
	I0904 00:03:56.152125    4292 request.go:683] "Waited before sending request" delay="194.6213ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://172.25.126.63:8443/api/v1/nodes/multinode-477700"
	I0904 00:03:56.156842    4292 pod_ready.go:94] pod "kube-scheduler-multinode-477700" is "Ready"
	I0904 00:03:56.156923    4292 pod_ready.go:86] duration metric: took 398.5412ms for pod "kube-scheduler-multinode-477700" in "kube-system" namespace to be "Ready" or be gone ...
	I0904 00:03:56.156923    4292 pod_ready.go:40] duration metric: took 2.0074543s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0904 00:03:56.287267    4292 start.go:617] kubectl: 1.34.0, cluster: 1.34.0 (minor skew: 0)
	I0904 00:03:56.291253    4292 out.go:179] * Done! kubectl is now configured to use "multinode-477700" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 03 23:59:45 multinode-477700 dockerd[1775]: time="2025-09-03T23:59:45.744749078Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 03 23:59:46 multinode-477700 dockerd[1775]: time="2025-09-03T23:59:46.495559966Z" level=info msg="Loading containers: start."
	Sep 03 23:59:46 multinode-477700 dockerd[1775]: time="2025-09-03T23:59:46.688631257Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 03 23:59:46 multinode-477700 dockerd[1775]: time="2025-09-03T23:59:46.820674907Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint_count 01895f373892af2d781f5e4e706231f649d329611259a8bdda06b0517a5b2641], retrying...."
	Sep 03 23:59:47 multinode-477700 dockerd[1775]: time="2025-09-03T23:59:47.361902360Z" level=info msg="Loading containers: done."
	Sep 03 23:59:47 multinode-477700 dockerd[1775]: time="2025-09-03T23:59:47.386946033Z" level=info msg="Docker daemon" commit=e77ff99 containerd-snapshotter=false storage-driver=overlay2 version=28.3.2
	Sep 03 23:59:47 multinode-477700 dockerd[1775]: time="2025-09-03T23:59:47.386999334Z" level=info msg="Initializing buildkit"
	Sep 03 23:59:47 multinode-477700 dockerd[1775]: time="2025-09-03T23:59:47.414249567Z" level=info msg="Completed buildkit initialization"
	Sep 03 23:59:47 multinode-477700 dockerd[1775]: time="2025-09-03T23:59:47.423144006Z" level=info msg="Daemon has completed initialization"
	Sep 03 23:59:47 multinode-477700 dockerd[1775]: time="2025-09-03T23:59:47.423315111Z" level=info msg="API listen on /run/docker.sock"
	Sep 03 23:59:47 multinode-477700 dockerd[1775]: time="2025-09-03T23:59:47.423374212Z" level=info msg="API listen on [::]:2376"
	Sep 03 23:59:47 multinode-477700 dockerd[1775]: time="2025-09-03T23:59:47.423400813Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 03 23:59:47 multinode-477700 systemd[1]: Started Docker Application Container Engine.
	Sep 03 23:59:57 multinode-477700 cri-dockerd[1640]: time="2025-09-03T23:59:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/be2ad3b809d0ce9c07ebbdaf52240a6c14e81d7434db537a1f29338cea500d5a/resolv.conf as [nameserver 172.25.112.1]"
	Sep 03 23:59:57 multinode-477700 cri-dockerd[1640]: time="2025-09-03T23:59:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e2706c7084c7d658714a1296970f56cc2c6195e108a92760ee721bb07fbb9f25/resolv.conf as [nameserver 172.25.112.1]"
	Sep 03 23:59:57 multinode-477700 cri-dockerd[1640]: time="2025-09-03T23:59:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9b5837c04c52b7973e89c382685db5ffa2066fe6ea05a30b6c53943bdace558c/resolv.conf as [nameserver 172.25.112.1]"
	Sep 03 23:59:58 multinode-477700 cri-dockerd[1640]: time="2025-09-03T23:59:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8b34bc6a82c97151717e1c54ae81d77c187f8c405d18031e1e0d8d283fca4c15/resolv.conf as [nameserver 172.25.112.1]"
	Sep 04 00:00:10 multinode-477700 cri-dockerd[1640]: time="2025-09-04T00:00:10Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 04 00:00:12 multinode-477700 cri-dockerd[1640]: time="2025-09-04T00:00:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/71185e7e5e3a73f7b5bc3c54f0b78f0e94d529f469820027530534bdce11aa06/resolv.conf as [nameserver 172.25.112.1]"
	Sep 04 00:00:12 multinode-477700 cri-dockerd[1640]: time="2025-09-04T00:00:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4c1d437a10c4c6b22fc35984bdce198fa734fffdc46629cab5f947ab23fc4330/resolv.conf as [nameserver 172.25.112.1]"
	Sep 04 00:00:19 multinode-477700 cri-dockerd[1640]: time="2025-09-04T00:00:19Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20250512-df8de77b: Status: Downloaded newer image for kindest/kindnetd:v20250512-df8de77b"
	Sep 04 00:00:34 multinode-477700 cri-dockerd[1640]: time="2025-09-04T00:00:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/882d6e338723d7cd04e223a8df9093f1e5b39a41416a7bdb7104487e3061a0e8/resolv.conf as [nameserver 172.25.112.1]"
	Sep 04 00:00:35 multinode-477700 cri-dockerd[1640]: time="2025-09-04T00:00:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7ec79c04c516bf3820f15e424313b92e2b7363fdec1920819d1beb3e385c6690/resolv.conf as [nameserver 172.25.112.1]"
	Sep 04 00:04:21 multinode-477700 cri-dockerd[1640]: time="2025-09-04T00:04:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/076e3b0b4e95f7f9aa733bf01a48e77770208afcd20307559262a179e3dcd165/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 04 00:04:23 multinode-477700 cri-dockerd[1640]: time="2025-09-04T00:04:23Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	316321453cf2b       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   48 seconds ago      Running             busybox                   0                   076e3b0b4e95f       busybox-7b57f96db7-bj95n
	89b7640b7697a       52546a367cc9e                                                                                         4 minutes ago       Running             coredns                   0                   882d6e338723d       coredns-66bc5c9577-mg9nc
	cd3b66b73cb4b       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   7ec79c04c516b       storage-provisioner
	3dd1de2460602       kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a              4 minutes ago       Running             kindnet-cni               0                   4c1d437a10c4c       kindnet-gdpss
	a5c4aad9ef6fa       df0860106674d                                                                                         4 minutes ago       Running             kube-proxy                0                   71185e7e5e3a7       kube-proxy-v9bfx
	0545be46c0c92       5f1f5298c888d                                                                                         5 minutes ago       Running             etcd                      0                   8b34bc6a82c97       etcd-multinode-477700
	944ecb4902689       a0af72f2ec6d6                                                                                         5 minutes ago       Running             kube-controller-manager   0                   9b5837c04c52b       kube-controller-manager-multinode-477700
	2b011dd581a49       46169d968e920                                                                                         5 minutes ago       Running             kube-scheduler            0                   e2706c7084c7d       kube-scheduler-multinode-477700
	774d3869c70e5       90550c43ad2bc                                                                                         5 minutes ago       Running             kube-apiserver            0                   be2ad3b809d0c       kube-apiserver-multinode-477700
	
	
	==> coredns [89b7640b7697] <==
	[INFO] 10.244.0.3:59293 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000171603s
	[INFO] 10.244.1.2:57479 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000230503s
	[INFO] 10.244.1.2:32790 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000074001s
	[INFO] 10.244.1.2:53178 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000378305s
	[INFO] 10.244.1.2:44826 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104201s
	[INFO] 10.244.1.2:59967 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000168702s
	[INFO] 10.244.1.2:36824 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000274004s
	[INFO] 10.244.1.2:56069 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092801s
	[INFO] 10.244.1.2:42000 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095701s
	[INFO] 10.244.0.3:60492 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000234304s
	[INFO] 10.244.0.3:49587 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000128502s
	[INFO] 10.244.0.3:41537 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000614908s
	[INFO] 10.244.0.3:41562 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059901s
	[INFO] 10.244.1.2:33339 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141102s
	[INFO] 10.244.1.2:37904 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000153002s
	[INFO] 10.244.1.2:43813 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107001s
	[INFO] 10.244.1.2:36152 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000169902s
	[INFO] 10.244.0.3:59535 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139802s
	[INFO] 10.244.0.3:56781 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000176202s
	[INFO] 10.244.0.3:40076 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000151002s
	[INFO] 10.244.0.3:43241 - 5 "PTR IN 1.112.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000274004s
	[INFO] 10.244.1.2:46944 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000268504s
	[INFO] 10.244.1.2:35091 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000126902s
	[INFO] 10.244.1.2:40051 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135702s
	[INFO] 10.244.1.2:46583 - 5 "PTR IN 1.112.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000069501s
	
	
	==> describe nodes <==
	Name:               multinode-477700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-477700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b3583632deefb20d71cab8d8ac0a8c3504aed1fb
	                    minikube.k8s.io/name=multinode-477700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_04T00_00_06_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Sep 2025 00:00:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-477700
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Sep 2025 00:05:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Sep 2025 00:04:41 +0000   Wed, 03 Sep 2025 23:59:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Sep 2025 00:04:41 +0000   Wed, 03 Sep 2025 23:59:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Sep 2025 00:04:41 +0000   Wed, 03 Sep 2025 23:59:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Sep 2025 00:04:41 +0000   Thu, 04 Sep 2025 00:00:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.126.63
	  Hostname:    multinode-477700
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	System Info:
	  Machine ID:                 dec7ce3ebc4e4930a73000785ecfeeda
	  System UUID:                ce975b69-0775-4046-ad71-2f0d48df367a
	  Boot ID:                    42829445-0964-4a1d-bc38-77fa1badbab8
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.3.2
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-bj95n                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 coredns-66bc5c9577-mg9nc                    100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     5m
	  kube-system                 etcd-multinode-477700                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         5m5s
	  kube-system                 kindnet-gdpss                               100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      5m
	  kube-system                 kube-apiserver-multinode-477700             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-controller-manager-multinode-477700    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-proxy-v9bfx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-scheduler-multinode-477700             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (7%)  220Mi (7%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m58s  kube-proxy       
	  Normal  Starting                 5m6s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m5s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m5s   kubelet          Node multinode-477700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m5s   kubelet          Node multinode-477700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m5s   kubelet          Node multinode-477700 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m1s   node-controller  Node multinode-477700 event: Registered Node multinode-477700 in Controller
	  Normal  NodeReady                4m37s  kubelet          Node multinode-477700 status is now: NodeReady
	
	
	Name:               multinode-477700-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-477700-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b3583632deefb20d71cab8d8ac0a8c3504aed1fb
	                    minikube.k8s.io/name=multinode-477700
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_04T00_03_21_0700
	                    minikube.k8s.io/version=v1.36.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Sep 2025 00:03:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-477700-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Sep 2025 00:05:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Sep 2025 00:04:52 +0000   Thu, 04 Sep 2025 00:03:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Sep 2025 00:04:52 +0000   Thu, 04 Sep 2025 00:03:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Sep 2025 00:04:52 +0000   Thu, 04 Sep 2025 00:03:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Sep 2025 00:04:52 +0000   Thu, 04 Sep 2025 00:03:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.125.181
	  Hostname:    multinode-477700-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	System Info:
	  Machine ID:                 2cb4e8da4e5f40b5ae3bbefd796b0f2b
	  System UUID:                49db6bbd-c81b-9347-add1-c7b46a2fd100
	  Boot ID:                    1773922c-d391-4d37-ad4f-1f4bce675cd2
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.3.2
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-vpdc8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 kindnet-ljv6w               100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      111s
	  kube-system                 kube-proxy-lnh8p            0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (1%)  50Mi (1%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 100s                 kube-proxy       
	  Normal  RegisteredNode           111s                 node-controller  Node multinode-477700-m02 event: Registered Node multinode-477700-m02 in Controller
	  Normal  NodeHasSufficientMemory  111s (x3 over 111s)  kubelet          Node multinode-477700-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s (x3 over 111s)  kubelet          Node multinode-477700-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     111s (x3 over 111s)  kubelet          Node multinode-477700-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  111s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                78s                  kubelet          Node multinode-477700-m02 status is now: NodeReady
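The percentages in the "Allocated resources" tables above follow directly from summed pod requests over the node's allocatable capacity. As a quick sanity check (editor's illustration, not part of the report), the primary node's `cpu 850m (42%)` line can be reproduced from the per-pod requests listed under "Non-terminated Pods":

```python
# Node multinode-477700: allocatable cpu = 2 cores = 2000m.
# Summed CPU requests from the pod table above:
#   kube-apiserver 250m, kube-controller-manager 200m, kube-scheduler 100m,
#   etcd 100m, coredns 100m, kindnet 100m (the remaining pods request 0).
allocatable_m = 2000
requests_m = 250 + 200 + 100 + 100 + 100 + 100
pct = requests_m * 100 // allocatable_m  # kubectl appears to truncate, not round
print(f"cpu {requests_m}m ({pct}%)")  # matches the report: cpu 850m (42%)
```

The same arithmetic reproduces the memory line: 70Mi + 100Mi + 50Mi = 220Mi requested.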
	
	
	==> dmesg <==
	[Sep 3 23:57] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000000] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +0.003109] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.000001] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.000010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001588] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	              * this clock source is slow. Consider trying other clock sources
	[  +1.008334] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[Sep 3 23:58] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002486] (rpcbind)[114]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.581800] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep 3 23:59] kauditd_printk_skb: 96 callbacks suppressed
	[  +0.191992] kauditd_printk_skb: 237 callbacks suppressed
	[  +0.162100] kauditd_printk_skb: 193 callbacks suppressed
	[Sep 4 00:00] kauditd_printk_skb: 159 callbacks suppressed
	[  +0.804898] kauditd_printk_skb: 12 callbacks suppressed
	[  +9.162010] kauditd_printk_skb: 129 callbacks suppressed
	[ +13.854658] kauditd_printk_skb: 17 callbacks suppressed
	[Sep 4 00:03] hrtimer: interrupt took 1281809 ns
	[Sep 4 00:04] kauditd_printk_skb: 56 callbacks suppressed
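The dmesg section above uses the human-readable format in which a bracketed `[Sep 3 23:57]` line anchors a new minute and `[  +1.008334]`-style lines are offsets relative to the previous line. A minimal sketch of reconstructing absolute timestamps from that format (hypothetical helper, not part of the report):

```python
import re
from datetime import datetime, timedelta

ANCHOR = re.compile(r"^\[([A-Z][a-z]{2}) +(\d+) (\d+):(\d+)\]")
OFFSET = re.compile(r"^\[ *\+(\d+)\.(\d{6})\]")

def absolute_times(lines, year=2025):
    """Return (timestamp, line) pairs for dmesg lines in relative format."""
    months = {m: i for i, m in enumerate(
        "Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec".split(), 1)}
    current = None
    out = []
    for line in lines:
        m = ANCHOR.match(line)
        if m:
            # "[Sep 3 23:57]" resets the running clock to that minute
            mon, day, hh, mm = m.groups()
            current = datetime(year, months[mon], int(day), int(hh), int(mm))
        else:
            o = OFFSET.match(line)
            if o and current is not None:
                # "[  +1.008334]" advances the clock by seconds.microseconds
                current += timedelta(seconds=int(o.group(1)),
                                     microseconds=int(o.group(2)))
        out.append((current, line))
    return out
```

Under these assumptions, the `psmouse` line above would land at roughly 23:57:01 on Sep 3.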
	
	
	==> etcd [0545be46c0c9] <==
	{"level":"warn","ts":"2025-09-04T00:00:01.307012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T00:00:01.323661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T00:00:01.330885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T00:00:01.345718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T00:00:01.372862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T00:00:01.379171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T00:00:01.393444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T00:00:01.408460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T00:00:01.423959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T00:00:01.437097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T00:00:20.421845Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"504.945838ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-09-04T00:00:20.421922Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"510.497414ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-477700\" limit:1 ","response":"range_response_count:1 size:4393"}
	{"level":"info","ts":"2025-09-04T00:00:20.421955Z","caller":"traceutil/trace.go:172","msg":"trace[843290500] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:393; }","duration":"505.088143ms","start":"2025-09-04T00:00:19.916853Z","end":"2025-09-04T00:00:20.421941Z","steps":["trace[843290500] 'range keys from in-memory index tree'  (duration: 504.880537ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T00:00:20.421959Z","caller":"traceutil/trace.go:172","msg":"trace[1462842717] range","detail":"{range_begin:/registry/minions/multinode-477700; range_end:; response_count:1; response_revision:393; }","duration":"510.539615ms","start":"2025-09-04T00:00:19.911411Z","end":"2025-09-04T00:00:20.421951Z","steps":["trace[1462842717] 'range keys from in-memory index tree'  (duration: 510.321909ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-04T00:00:20.421987Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-04T00:00:19.911392Z","time spent":"510.587917ms","remote":"127.0.0.1:41872","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":1,"response size":4416,"request content":"key:\"/registry/minions/multinode-477700\" limit:1 "}
	{"level":"warn","ts":"2025-09-04T00:00:20.516516Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"249.353309ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11979180361539113674 > lease_revoke:<id:263e991205dfd101>","response":"size:28"}
	{"level":"info","ts":"2025-09-04T00:00:20.516657Z","caller":"traceutil/trace.go:172","msg":"trace[2120070819] linearizableReadLoop","detail":"{readStateIndex:408; appliedIndex:407; }","duration":"198.352011ms","start":"2025-09-04T00:00:20.318293Z","end":"2025-09-04T00:00:20.516645Z","steps":["trace[2120070819] 'read index received'  (duration: 91.202µs)","trace[2120070819] 'applied index is now lower than readState.Index'  (duration: 198.259909ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-04T00:00:20.516781Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"198.476716ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-04T00:00:20.516901Z","caller":"traceutil/trace.go:172","msg":"trace[951894700] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:393; }","duration":"198.60262ms","start":"2025-09-04T00:00:20.318289Z","end":"2025-09-04T00:00:20.516891Z","steps":["trace[951894700] 'agreement among raft nodes before linearized reading'  (duration: 198.443815ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T00:01:18.068930Z","caller":"traceutil/trace.go:172","msg":"trace[983988914] transaction","detail":"{read_only:false; response_revision:460; number_of_response:1; }","duration":"101.206705ms","start":"2025-09-04T00:01:17.967709Z","end":"2025-09-04T00:01:18.068916Z","steps":["trace[983988914] 'process raft request'  (duration: 101.117706ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-04T00:03:13.286401Z","caller":"traceutil/trace.go:172","msg":"trace[777067811] transaction","detail":"{read_only:false; response_revision:551; number_of_response:1; }","duration":"104.134266ms","start":"2025-09-04T00:03:13.182248Z","end":"2025-09-04T00:03:13.286383Z","steps":["trace[777067811] 'process raft request'  (duration: 103.950565ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-04T00:03:37.720199Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.231249ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-477700-m02\" limit:1 ","response":"range_response_count:1 size:2987"}
	{"level":"info","ts":"2025-09-04T00:03:37.720262Z","caller":"traceutil/trace.go:172","msg":"trace[1677085871] range","detail":"{range_begin:/registry/minions/multinode-477700-m02; range_end:; response_count:1; response_revision:617; }","duration":"122.304949ms","start":"2025-09-04T00:03:37.597947Z","end":"2025-09-04T00:03:37.720251Z","steps":["trace[1677085871] 'range keys from in-memory index tree'  (duration: 122.113048ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-04T00:03:37.720386Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.381261ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-04T00:03:37.720454Z","caller":"traceutil/trace.go:172","msg":"trace[1757120171] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:617; }","duration":"100.445662ms","start":"2025-09-04T00:03:37.619984Z","end":"2025-09-04T00:03:37.720430Z","steps":["trace[1757120171] 'range keys from in-memory index tree'  (duration: 100.351261ms)"],"step_count":1}
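Each etcd log line above is structured JSON, so the "apply request took too long" warnings can be filtered mechanically rather than by eye. A minimal sketch (the sample line is copied verbatim from the log; the 100ms threshold mirrors etcd's own `expected-duration` field, and the parsing assumes a `took` value expressed in milliseconds as in these lines):

```python
import json

line = ('{"level":"warn","ts":"2025-09-04T00:00:20.421845Z",'
        '"caller":"txn/util.go:93","msg":"apply request took too long",'
        '"took":"504.945838ms","expected-duration":"100ms",'
        '"prefix":"read-only range ",'
        '"request":"limit:1 serializable:true keys_only:true ",'
        '"response":"range_response_count:0 size:5"}')

entry = json.loads(line)
# rstrip("ms") is safe here because the numeric part contains no 'm'/'s'
took_ms = float(entry["took"].rstrip("ms"))
if entry["msg"] == "apply request took too long" and took_ms > 100:
    print(f"slow apply: {took_ms:.1f}ms (limit {entry['expected-duration']})")
```

Applied across the section, this would flag the ~505ms and ~510ms range reads around 00:00:20 as the slowest requests in the run.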
	
	
	==> kernel <==
	 00:05:11 up 7 min,  0 users,  load average: 0.30, 0.35, 0.19
	Linux multinode-477700 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Jul 26 03:21:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kindnet [3dd1de246060] <==
	I0904 00:04:01.752700       1 main.go:301] handling current node
	I0904 00:04:11.748648       1 main.go:297] Handling node with IPs: map[172.25.126.63:{}]
	I0904 00:04:11.748744       1 main.go:301] handling current node
	I0904 00:04:11.748765       1 main.go:297] Handling node with IPs: map[172.25.125.181:{}]
	I0904 00:04:11.748772       1 main.go:324] Node multinode-477700-m02 has CIDR [10.244.1.0/24] 
	I0904 00:04:21.748160       1 main.go:297] Handling node with IPs: map[172.25.126.63:{}]
	I0904 00:04:21.748261       1 main.go:301] handling current node
	I0904 00:04:21.748280       1 main.go:297] Handling node with IPs: map[172.25.125.181:{}]
	I0904 00:04:21.748287       1 main.go:324] Node multinode-477700-m02 has CIDR [10.244.1.0/24] 
	I0904 00:04:31.753476       1 main.go:297] Handling node with IPs: map[172.25.125.181:{}]
	I0904 00:04:31.753630       1 main.go:324] Node multinode-477700-m02 has CIDR [10.244.1.0/24] 
	I0904 00:04:31.753895       1 main.go:297] Handling node with IPs: map[172.25.126.63:{}]
	I0904 00:04:31.753936       1 main.go:301] handling current node
	I0904 00:04:41.748766       1 main.go:297] Handling node with IPs: map[172.25.126.63:{}]
	I0904 00:04:41.749550       1 main.go:301] handling current node
	I0904 00:04:41.749576       1 main.go:297] Handling node with IPs: map[172.25.125.181:{}]
	I0904 00:04:41.749687       1 main.go:324] Node multinode-477700-m02 has CIDR [10.244.1.0/24] 
	I0904 00:04:51.755467       1 main.go:297] Handling node with IPs: map[172.25.126.63:{}]
	I0904 00:04:51.755559       1 main.go:301] handling current node
	I0904 00:04:51.755579       1 main.go:297] Handling node with IPs: map[172.25.125.181:{}]
	I0904 00:04:51.755586       1 main.go:324] Node multinode-477700-m02 has CIDR [10.244.1.0/24] 
	I0904 00:05:01.755557       1 main.go:297] Handling node with IPs: map[172.25.126.63:{}]
	I0904 00:05:01.755667       1 main.go:301] handling current node
	I0904 00:05:01.755687       1 main.go:297] Handling node with IPs: map[172.25.125.181:{}]
	I0904 00:05:01.755694       1 main.go:324] Node multinode-477700-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [774d3869c70e] <==
	I0904 00:00:05.376188       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0904 00:00:05.794781       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0904 00:00:05.835355       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0904 00:00:05.853114       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0904 00:00:11.183114       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0904 00:00:11.419400       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0904 00:00:11.468226       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0904 00:00:11.495987       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0904 00:01:13.284356       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 00:01:26.322653       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 00:02:14.804162       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 00:02:54.666751       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 00:03:42.160430       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 00:04:16.803540       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0904 00:04:26.552694       1 conn.go:339] Error on socket receive: read tcp 172.25.126.63:8443->172.25.112.1:60873: use of closed network connection
	E0904 00:04:27.051192       1 conn.go:339] Error on socket receive: read tcp 172.25.126.63:8443->172.25.112.1:60875: use of closed network connection
	E0904 00:04:27.667911       1 conn.go:339] Error on socket receive: read tcp 172.25.126.63:8443->172.25.112.1:60877: use of closed network connection
	E0904 00:04:28.214824       1 conn.go:339] Error on socket receive: read tcp 172.25.126.63:8443->172.25.112.1:60879: use of closed network connection
	E0904 00:04:28.755565       1 conn.go:339] Error on socket receive: read tcp 172.25.126.63:8443->172.25.112.1:60881: use of closed network connection
	E0904 00:04:29.311920       1 conn.go:339] Error on socket receive: read tcp 172.25.126.63:8443->172.25.112.1:60883: use of closed network connection
	E0904 00:04:30.309347       1 conn.go:339] Error on socket receive: read tcp 172.25.126.63:8443->172.25.112.1:60886: use of closed network connection
	E0904 00:04:40.829614       1 conn.go:339] Error on socket receive: read tcp 172.25.126.63:8443->172.25.112.1:60888: use of closed network connection
	E0904 00:04:41.317395       1 conn.go:339] Error on socket receive: read tcp 172.25.126.63:8443->172.25.112.1:60892: use of closed network connection
	E0904 00:04:51.814558       1 conn.go:339] Error on socket receive: read tcp 172.25.126.63:8443->172.25.112.1:60894: use of closed network connection
	I0904 00:04:54.144085       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [944ecb490268] <==
	I0904 00:00:10.425525       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0904 00:00:10.425764       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-477700"
	I0904 00:00:10.426105       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0904 00:00:10.427714       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0904 00:00:10.427929       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0904 00:00:10.429702       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0904 00:00:10.430393       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0904 00:00:10.429760       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0904 00:00:10.431773       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0904 00:00:10.432101       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-477700" podCIDRs=["10.244.0.0/24"]
	I0904 00:00:10.433389       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0904 00:00:10.433394       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0904 00:00:10.433454       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0904 00:00:10.439642       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0904 00:00:10.441791       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0904 00:00:10.447313       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0904 00:00:10.449078       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0904 00:00:10.473317       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0904 00:00:10.473409       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0904 00:00:10.473418       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0904 00:00:35.432314       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0904 00:03:20.421389       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-477700-m02\" does not exist"
	I0904 00:03:20.468745       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-477700-m02"
	I0904 00:03:20.470117       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-477700-m02" podCIDRs=["10.244.1.0/24"]
	I0904 00:03:53.619658       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-477700-m02"
	
	
	==> kube-proxy [a5c4aad9ef6f] <==
	I0904 00:00:12.868323       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0904 00:00:12.971779       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0904 00:00:12.972017       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["172.25.126.63"]
	E0904 00:00:12.972390       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 00:00:13.076726       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0904 00:00:13.076871       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0904 00:00:13.076904       1 server_linux.go:132] "Using iptables Proxier"
	I0904 00:00:13.095558       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 00:00:13.096639       1 server.go:527] "Version info" version="v1.34.0"
	I0904 00:00:13.096932       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 00:00:13.104702       1 config.go:309] "Starting node config controller"
	I0904 00:00:13.104967       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0904 00:00:13.105246       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0904 00:00:13.106085       1 config.go:403] "Starting serviceCIDR config controller"
	I0904 00:00:13.106209       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0904 00:00:13.106393       1 config.go:200] "Starting service config controller"
	I0904 00:00:13.106402       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0904 00:00:13.106417       1 config.go:106] "Starting endpoint slice config controller"
	I0904 00:00:13.106489       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0904 00:00:13.207252       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0904 00:00:13.207298       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0904 00:00:13.207304       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [2b011dd581a4] <==
	E0904 00:00:02.444845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0904 00:00:02.447280       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0904 00:00:02.447447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0904 00:00:02.447737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0904 00:00:02.448007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0904 00:00:03.267263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0904 00:00:03.306674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0904 00:00:03.471678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0904 00:00:03.523287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0904 00:00:03.602845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0904 00:00:03.633875       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0904 00:00:03.679977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0904 00:00:03.696864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0904 00:00:03.752294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0904 00:00:03.762244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0904 00:00:03.778242       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0904 00:00:03.808584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0904 00:00:03.826459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0904 00:00:03.831716       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0904 00:00:03.939748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0904 00:00:03.965874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0904 00:00:03.985838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0904 00:00:04.013641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0904 00:00:04.046466       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I0904 00:00:06.221474       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 04 00:00:10 multinode-477700 kubelet[2796]: I0904 00:00:10.483133    2796 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 04 00:00:10 multinode-477700 kubelet[2796]: I0904 00:00:10.484918    2796 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 04 00:00:11 multinode-477700 kubelet[2796]: I0904 00:00:11.379961    2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2af7872d-5ba2-4df0-89ef-eb2c46ddd319-cni-cfg\") pod \"kindnet-gdpss\" (UID: \"2af7872d-5ba2-4df0-89ef-eb2c46ddd319\") " pod="kube-system/kindnet-gdpss"
	Sep 04 00:00:11 multinode-477700 kubelet[2796]: I0904 00:00:11.380124    2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2af7872d-5ba2-4df0-89ef-eb2c46ddd319-xtables-lock\") pod \"kindnet-gdpss\" (UID: \"2af7872d-5ba2-4df0-89ef-eb2c46ddd319\") " pod="kube-system/kindnet-gdpss"
	Sep 04 00:00:11 multinode-477700 kubelet[2796]: I0904 00:00:11.380278    2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4cww\" (UniqueName: \"kubernetes.io/projected/2af7872d-5ba2-4df0-89ef-eb2c46ddd319-kube-api-access-c4cww\") pod \"kindnet-gdpss\" (UID: \"2af7872d-5ba2-4df0-89ef-eb2c46ddd319\") " pod="kube-system/kindnet-gdpss"
	Sep 04 00:00:11 multinode-477700 kubelet[2796]: I0904 00:00:11.380323    2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2af7872d-5ba2-4df0-89ef-eb2c46ddd319-lib-modules\") pod \"kindnet-gdpss\" (UID: \"2af7872d-5ba2-4df0-89ef-eb2c46ddd319\") " pod="kube-system/kindnet-gdpss"
	Sep 04 00:00:11 multinode-477700 kubelet[2796]: I0904 00:00:11.481545    2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2e72957a-51b3-4f18-876a-32d17f1fcb01-kube-proxy\") pod \"kube-proxy-v9bfx\" (UID: \"2e72957a-51b3-4f18-876a-32d17f1fcb01\") " pod="kube-system/kube-proxy-v9bfx"
	Sep 04 00:00:11 multinode-477700 kubelet[2796]: I0904 00:00:11.481609    2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e72957a-51b3-4f18-876a-32d17f1fcb01-xtables-lock\") pod \"kube-proxy-v9bfx\" (UID: \"2e72957a-51b3-4f18-876a-32d17f1fcb01\") " pod="kube-system/kube-proxy-v9bfx"
	Sep 04 00:00:11 multinode-477700 kubelet[2796]: I0904 00:00:11.481717    2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e72957a-51b3-4f18-876a-32d17f1fcb01-lib-modules\") pod \"kube-proxy-v9bfx\" (UID: \"2e72957a-51b3-4f18-876a-32d17f1fcb01\") " pod="kube-system/kube-proxy-v9bfx"
	Sep 04 00:00:11 multinode-477700 kubelet[2796]: I0904 00:00:11.481761    2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz7bv\" (UniqueName: \"kubernetes.io/projected/2e72957a-51b3-4f18-876a-32d17f1fcb01-kube-api-access-wz7bv\") pod \"kube-proxy-v9bfx\" (UID: \"2e72957a-51b3-4f18-876a-32d17f1fcb01\") " pod="kube-system/kube-proxy-v9bfx"
	Sep 04 00:00:12 multinode-477700 kubelet[2796]: I0904 00:00:12.291589    2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71185e7e5e3a73f7b5bc3c54f0b78f0e94d529f469820027530534bdce11aa06"
	Sep 04 00:00:12 multinode-477700 kubelet[2796]: I0904 00:00:12.822232    2796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c1d437a10c4c6b22fc35984bdce198fa734fffdc46629cab5f947ab23fc4330"
	Sep 04 00:00:13 multinode-477700 kubelet[2796]: I0904 00:00:13.976520    2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v9bfx" podStartSLOduration=2.976494219 podStartE2EDuration="2.976494219s" podCreationTimestamp="2025-09-04 00:00:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:00:13.88288665 +0000 UTC m=+8.130160025" watchObservedRunningTime="2025-09-04 00:00:13.976494219 +0000 UTC m=+8.223767594"
	Sep 04 00:00:22 multinode-477700 kubelet[2796]: I0904 00:00:22.083256    2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-gdpss" podStartSLOduration=4.210351639 podStartE2EDuration="11.083237878s" podCreationTimestamp="2025-09-04 00:00:11 +0000 UTC" firstStartedPulling="2025-09-04 00:00:12.827735354 +0000 UTC m=+7.075008629" lastFinishedPulling="2025-09-04 00:00:19.700621493 +0000 UTC m=+13.947894868" observedRunningTime="2025-09-04 00:00:22.080941807 +0000 UTC m=+16.328215182" watchObservedRunningTime="2025-09-04 00:00:22.083237878 +0000 UTC m=+16.330511153"
	Sep 04 00:00:34 multinode-477700 kubelet[2796]: I0904 00:00:34.010007    2796 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Sep 04 00:00:34 multinode-477700 kubelet[2796]: I0904 00:00:34.098933    2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9lwj\" (UniqueName: \"kubernetes.io/projected/39d4fb7b-1473-4a4e-9fb1-ce058a1c4904-kube-api-access-w9lwj\") pod \"coredns-66bc5c9577-mg9nc\" (UID: \"39d4fb7b-1473-4a4e-9fb1-ce058a1c4904\") " pod="kube-system/coredns-66bc5c9577-mg9nc"
	Sep 04 00:00:34 multinode-477700 kubelet[2796]: I0904 00:00:34.098981    2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6ff776d2-685f-4111-bbe0-2d7f616fed2a-tmp\") pod \"storage-provisioner\" (UID: \"6ff776d2-685f-4111-bbe0-2d7f616fed2a\") " pod="kube-system/storage-provisioner"
	Sep 04 00:00:34 multinode-477700 kubelet[2796]: I0904 00:00:34.099005    2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39d4fb7b-1473-4a4e-9fb1-ce058a1c4904-config-volume\") pod \"coredns-66bc5c9577-mg9nc\" (UID: \"39d4fb7b-1473-4a4e-9fb1-ce058a1c4904\") " pod="kube-system/coredns-66bc5c9577-mg9nc"
	Sep 04 00:00:34 multinode-477700 kubelet[2796]: I0904 00:00:34.099028    2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhv92\" (UniqueName: \"kubernetes.io/projected/6ff776d2-685f-4111-bbe0-2d7f616fed2a-kube-api-access-zhv92\") pod \"storage-provisioner\" (UID: \"6ff776d2-685f-4111-bbe0-2d7f616fed2a\") " pod="kube-system/storage-provisioner"
	Sep 04 00:00:36 multinode-477700 kubelet[2796]: I0904 00:00:36.232767    2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=17.232752122 podStartE2EDuration="17.232752122s" podCreationTimestamp="2025-09-04 00:00:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:00:36.23163619 +0000 UTC m=+30.478909465" watchObservedRunningTime="2025-09-04 00:00:36.232752122 +0000 UTC m=+30.480025497"
	Sep 04 00:00:36 multinode-477700 kubelet[2796]: I0904 00:00:36.252986    2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-mg9nc" podStartSLOduration=25.252973097 podStartE2EDuration="25.252973097s" podCreationTimestamp="2025-09-04 00:00:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 00:00:36.251465055 +0000 UTC m=+30.498738430" watchObservedRunningTime="2025-09-04 00:00:36.252973097 +0000 UTC m=+30.500246472"
	Sep 04 00:04:20 multinode-477700 kubelet[2796]: I0904 00:04:20.680158    2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp9gr\" (UniqueName: \"kubernetes.io/projected/ac851b87-114b-409b-b27f-575f9243a270-kube-api-access-mp9gr\") pod \"busybox-7b57f96db7-bj95n\" (UID: \"ac851b87-114b-409b-b27f-575f9243a270\") " pod="default/busybox-7b57f96db7-bj95n"
	Sep 04 00:04:24 multinode-477700 kubelet[2796]: I0904 00:04:24.511271    2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7b57f96db7-bj95n" podStartSLOduration=2.617517598 podStartE2EDuration="4.51124992s" podCreationTimestamp="2025-09-04 00:04:20 +0000 UTC" firstStartedPulling="2025-09-04 00:04:21.379522435 +0000 UTC m=+255.626795710" lastFinishedPulling="2025-09-04 00:04:23.273254757 +0000 UTC m=+257.520528032" observedRunningTime="2025-09-04 00:04:24.510866615 +0000 UTC m=+258.758139990" watchObservedRunningTime="2025-09-04 00:04:24.51124992 +0000 UTC m=+258.758523295"
	Sep 04 00:04:26 multinode-477700 kubelet[2796]: E0904 00:04:26.555164    2796 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:48126->127.0.0.1:40545: write tcp 127.0.0.1:48126->127.0.0.1:40545: write: broken pipe
	Sep 04 00:04:28 multinode-477700 kubelet[2796]: E0904 00:04:28.756192    2796 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:48132->127.0.0.1:40545: write tcp 127.0.0.1:48132->127.0.0.1:40545: write: broken pipe
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-477700 -n multinode-477700
helpers_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-477700 -n multinode-477700: (11.8733025s)
helpers_test.go:269: (dbg) Run:  kubectl --context multinode-477700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (55.37s)

TestMultiNode/serial/RestartKeepsNodes (442.37s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-477700
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-477700
E0904 00:20:53.258976    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:21:07.874551    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-477700: (1m38.8035706s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-477700 --wait=true -v=5 --alsologtostderr
E0904 00:23:04.789422    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:25:53.262160    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-477700 --wait=true -v=5 --alsologtostderr: exit status 1 (5m8.9832853s)

-- stdout --
	* [multinode-477700] minikube v1.36.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6282 Build 19045.6282
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=21341
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "multinode-477700" primary control-plane node in "multinode-477700" cluster
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	* Starting "multinode-477700-m02" worker node in "multinode-477700" cluster
	* Found network options:
	  - NO_PROXY=172.25.112.78
	  - NO_PROXY=172.25.112.78
	  - env NO_PROXY=172.25.112.78

-- /stdout --
** stderr ** 
	I0904 00:21:53.830751   11080 out.go:360] Setting OutFile to fd 1044 ...
	I0904 00:21:53.908670   11080 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 00:21:53.908670   11080 out.go:374] Setting ErrFile to fd 1244...
	I0904 00:21:53.908670   11080 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 00:21:53.927252   11080 out.go:368] Setting JSON to false
	I0904 00:21:53.929759   11080 start.go:130] hostinfo: {"hostname":"minikube6","uptime":28419,"bootTime":1756916894,"procs":179,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6282 Build 19045.6282","kernelVersion":"10.0.19045.6282 Build 19045.6282","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0904 00:21:53.930734   11080 start.go:138] gopshost.Virtualization returned error: not implemented yet
	I0904 00:21:54.162920   11080 out.go:179] * [multinode-477700] minikube v1.36.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6282 Build 19045.6282
	I0904 00:21:54.209304   11080 notify.go:220] Checking for updates...
	I0904 00:21:54.222664   11080 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0904 00:21:54.270626   11080 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 00:21:54.315740   11080 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0904 00:21:54.328882   11080 out.go:179]   - MINIKUBE_LOCATION=21341
	I0904 00:21:54.356118   11080 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 00:21:54.364932   11080 config.go:182] Loaded profile config "multinode-477700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0904 00:21:54.364932   11080 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 00:21:59.700221   11080 out.go:179] * Using the hyperv driver based on existing profile
	I0904 00:21:59.708846   11080 start.go:304] selected driver: hyperv
	I0904 00:21:59.708885   11080 start.go:918] validating driver "hyperv" against &{Name:multinode-477700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.34.0 ClusterName:multinode-477700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.126.63 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.125.181 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.25.125.123 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:fals
e ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 00:21:59.708912   11080 start.go:929] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 00:21:59.765931   11080 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 00:21:59.765931   11080 cni.go:84] Creating CNI manager for ""
	I0904 00:21:59.766489   11080 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0904 00:21:59.766799   11080 start.go:348] cluster config:
	{Name:multinode-477700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:multinode-477700 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.126.63 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.125.181 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.25.125.123 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio
-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 00:21:59.767173   11080 iso.go:125] acquiring lock: {Name:mk966bde02eeea119c68f0830e579f0a83ec9e11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 00:21:59.777512   11080 out.go:179] * Starting "multinode-477700" primary control-plane node in "multinode-477700" cluster
	I0904 00:21:59.783173   11080 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0904 00:21:59.783688   11080 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0904 00:21:59.783774   11080 cache.go:58] Caching tarball of preloaded images
	I0904 00:21:59.784037   11080 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0904 00:21:59.784200   11080 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0904 00:21:59.784701   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\config.json ...
	I0904 00:21:59.787503   11080 start.go:360] acquireMachinesLock for multinode-477700: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 00:21:59.787503   11080 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-477700"
	I0904 00:21:59.788152   11080 start.go:96] Skipping create...Using existing machine configuration
	I0904 00:21:59.788239   11080 fix.go:54] fixHost starting: 
	I0904 00:21:59.788996   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:22:02.453183   11080 main.go:141] libmachine: [stdout =====>] : Off
	
	I0904 00:22:02.454275   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:02.454448   11080 fix.go:112] recreateIfNeeded on multinode-477700: state=Stopped err=<nil>
	W0904 00:22:02.454514   11080 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 00:22:02.518803   11080 out.go:252] * Restarting existing hyperv VM for "multinode-477700" ...
	I0904 00:22:02.520495   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-477700
	I0904 00:22:05.617381   11080 main.go:141] libmachine: [stdout =====>] : 
	I0904 00:22:05.617381   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:05.617381   11080 main.go:141] libmachine: Waiting for host to start...
	I0904 00:22:05.617381   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:22:07.790361   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:22:07.790412   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:07.790532   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:22:10.288167   11080 main.go:141] libmachine: [stdout =====>] : 
	I0904 00:22:10.288167   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:11.289163   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:22:13.443324   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:22:13.443489   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:13.443489   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:22:15.930487   11080 main.go:141] libmachine: [stdout =====>] : 
	I0904 00:22:15.930765   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:16.930936   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:22:19.095594   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:22:19.096269   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:19.096269   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:22:21.563235   11080 main.go:141] libmachine: [stdout =====>] : 
	I0904 00:22:21.563235   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:22.565078   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:22:24.696931   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:22:24.696931   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:24.696931   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:22:27.193028   11080 main.go:141] libmachine: [stdout =====>] : 
	I0904 00:22:27.193650   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:28.193723   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:22:30.332788   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:22:30.332788   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:30.333826   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:22:32.846163   11080 main.go:141] libmachine: [stdout =====>] : 172.25.112.78
	
	I0904 00:22:32.846163   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:32.849315   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:22:34.884654   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:22:34.885005   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:34.885005   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:22:37.354322   11080 main.go:141] libmachine: [stdout =====>] : 172.25.112.78
	
	I0904 00:22:37.354322   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:37.355353   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\config.json ...
	I0904 00:22:37.357440   11080 machine.go:93] provisionDockerMachine start ...
	I0904 00:22:37.358358   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:22:39.473855   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:22:39.474870   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:39.475020   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:22:41.907103   11080 main.go:141] libmachine: [stdout =====>] : 172.25.112.78
	
	I0904 00:22:41.907134   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:41.913197   11080 main.go:141] libmachine: Using SSH client type: native
	I0904 00:22:41.913880   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.112.78 22 <nil> <nil>}
	I0904 00:22:41.913880   11080 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 00:22:42.052582   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0904 00:22:42.052582   11080 buildroot.go:166] provisioning hostname "multinode-477700"
	I0904 00:22:42.052582   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:22:44.057552   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:22:44.057618   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:44.057990   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:22:46.457273   11080 main.go:141] libmachine: [stdout =====>] : 172.25.112.78
	
	I0904 00:22:46.457273   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:46.464964   11080 main.go:141] libmachine: Using SSH client type: native
	I0904 00:22:46.465120   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.112.78 22 <nil> <nil>}
	I0904 00:22:46.465120   11080 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-477700 && echo "multinode-477700" | sudo tee /etc/hostname
	I0904 00:22:46.636909   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-477700
	
	I0904 00:22:46.636909   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:22:48.631517   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:22:48.631517   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:48.631939   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:22:51.091840   11080 main.go:141] libmachine: [stdout =====>] : 172.25.112.78
	
	I0904 00:22:51.091840   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:51.100935   11080 main.go:141] libmachine: Using SSH client type: native
	I0904 00:22:51.100935   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.112.78 22 <nil> <nil>}
	I0904 00:22:51.100935   11080 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-477700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-477700/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-477700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 00:22:51.251091   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 00:22:51.251091   11080 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0904 00:22:51.251091   11080 buildroot.go:174] setting up certificates
	I0904 00:22:51.251091   11080 provision.go:84] configureAuth start
	I0904 00:22:51.251091   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:22:53.317236   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:22:53.317236   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:53.318363   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:22:55.701933   11080 main.go:141] libmachine: [stdout =====>] : 172.25.112.78
	
	I0904 00:22:55.702529   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:55.702529   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:22:57.731008   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:22:57.732150   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:57.732150   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:23:00.182342   11080 main.go:141] libmachine: [stdout =====>] : 172.25.112.78
	
	I0904 00:23:00.182342   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:00.182539   11080 provision.go:143] copyHostCerts
	I0904 00:23:00.183049   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0904 00:23:00.183398   11080 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0904 00:23:00.183519   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0904 00:23:00.184186   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0904 00:23:00.186001   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0904 00:23:00.186603   11080 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0904 00:23:00.186603   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0904 00:23:00.187373   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0904 00:23:00.188962   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0904 00:23:00.189282   11080 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0904 00:23:00.189282   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0904 00:23:00.190090   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0904 00:23:00.190937   11080 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-477700 san=[127.0.0.1 172.25.112.78 localhost minikube multinode-477700]
	I0904 00:23:00.423594   11080 provision.go:177] copyRemoteCerts
	I0904 00:23:00.436508   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 00:23:00.436508   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:23:02.454944   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:23:02.454944   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:02.455412   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:23:04.864600   11080 main.go:141] libmachine: [stdout =====>] : 172.25.112.78
	
	I0904 00:23:04.864765   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:04.865309   11080 sshutil.go:53] new ssh client: &{IP:172.25.112.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700\id_rsa Username:docker}
	I0904 00:23:04.984999   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5483161s)
	I0904 00:23:04.985103   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0904 00:23:04.985524   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0904 00:23:05.037788   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0904 00:23:05.037788   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0904 00:23:05.090714   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0904 00:23:05.090714   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0904 00:23:05.142464   11080 provision.go:87] duration metric: took 13.8911842s to configureAuth
	I0904 00:23:05.142464   11080 buildroot.go:189] setting minikube options for container-runtime
	I0904 00:23:05.143213   11080 config.go:182] Loaded profile config "multinode-477700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0904 00:23:05.143302   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:23:07.103929   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:23:07.103929   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:07.103929   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:23:09.546365   11080 main.go:141] libmachine: [stdout =====>] : 172.25.112.78
	
	I0904 00:23:09.546365   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:09.552997   11080 main.go:141] libmachine: Using SSH client type: native
	I0904 00:23:09.553312   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.112.78 22 <nil> <nil>}
	I0904 00:23:09.553312   11080 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0904 00:23:09.699438   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0904 00:23:09.700005   11080 buildroot.go:70] root file system type: tmpfs
	I0904 00:23:09.700199   11080 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0904 00:23:09.700199   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:23:11.723292   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:23:11.723292   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:11.723503   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:23:14.195537   11080 main.go:141] libmachine: [stdout =====>] : 172.25.112.78
	
	I0904 00:23:14.196466   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:14.202735   11080 main.go:141] libmachine: Using SSH client type: native
	I0904 00:23:14.203606   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.112.78 22 <nil> <nil>}
	I0904 00:23:14.203606   11080 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0904 00:23:14.373927   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0904 00:23:14.374149   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:23:16.365404   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:23:16.366574   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:16.366627   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:23:18.868457   11080 main.go:141] libmachine: [stdout =====>] : 172.25.112.78
	
	I0904 00:23:18.869138   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:18.875760   11080 main.go:141] libmachine: Using SSH client type: native
	I0904 00:23:18.876589   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.112.78 22 <nil> <nil>}
	I0904 00:23:18.876589   11080 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0904 00:23:20.602448   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
	
	I0904 00:23:20.602448   11080 machine.go:96] duration metric: took 43.2444198s to provisionDockerMachine
	I0904 00:23:20.602448   11080 start.go:293] postStartSetup for "multinode-477700" (driver="hyperv")
	I0904 00:23:20.602448   11080 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 00:23:20.613439   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 00:23:20.613439   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:23:22.777800   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:23:22.778267   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:22.778267   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:23:25.274825   11080 main.go:141] libmachine: [stdout =====>] : 172.25.112.78
	
	I0904 00:23:25.275971   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:25.276422   11080 sshutil.go:53] new ssh client: &{IP:172.25.112.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700\id_rsa Username:docker}
	I0904 00:23:25.391333   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7778292s)
	I0904 00:23:25.405582   11080 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 00:23:25.413257   11080 info.go:137] Remote host: Buildroot 2025.02
	I0904 00:23:25.413257   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0904 00:23:25.413257   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0904 00:23:25.414614   11080 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem -> 22202.pem in /etc/ssl/certs
	I0904 00:23:25.414706   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem -> /etc/ssl/certs/22202.pem
	I0904 00:23:25.426889   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0904 00:23:25.447418   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem --> /etc/ssl/certs/22202.pem (1708 bytes)
	I0904 00:23:25.500007   11080 start.go:296] duration metric: took 4.8974931s for postStartSetup
	I0904 00:23:25.500007   11080 fix.go:56] duration metric: took 1m25.7106027s for fixHost
	I0904 00:23:25.500007   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:23:27.535495   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:23:27.536344   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:27.536344   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:23:29.973646   11080 main.go:141] libmachine: [stdout =====>] : 172.25.112.78
	
	I0904 00:23:29.973646   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:29.979368   11080 main.go:141] libmachine: Using SSH client type: native
	I0904 00:23:29.979567   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.112.78 22 <nil> <nil>}
	I0904 00:23:29.979567   11080 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0904 00:23:30.114942   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 1756945410.135584876
	
	I0904 00:23:30.115136   11080 fix.go:216] guest clock: 1756945410.135584876
	I0904 00:23:30.115136   11080 fix.go:229] Guest: 2025-09-04 00:23:30.135584876 +0000 UTC Remote: 2025-09-04 00:23:25.5000078 +0000 UTC m=+91.763115801 (delta=4.635577076s)
	I0904 00:23:30.115270   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:23:32.167447   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:23:32.167503   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:32.167503   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:23:34.633958   11080 main.go:141] libmachine: [stdout =====>] : 172.25.112.78
	
	I0904 00:23:34.634420   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:34.640556   11080 main.go:141] libmachine: Using SSH client type: native
	I0904 00:23:34.641189   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.112.78 22 <nil> <nil>}
	I0904 00:23:34.641189   11080 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1756945410
	I0904 00:23:34.790849   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Sep  4 00:23:30 UTC 2025
	
	I0904 00:23:34.790849   11080 fix.go:236] clock set: Thu Sep  4 00:23:30 UTC 2025
	 (err=<nil>)
	I0904 00:23:34.790849   11080 start.go:83] releasing machines lock for "multinode-477700", held for 1m35.0020545s
	I0904 00:23:34.791789   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:23:36.800343   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:23:36.800343   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:36.801010   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:23:39.293977   11080 main.go:141] libmachine: [stdout =====>] : 172.25.112.78
	
	I0904 00:23:39.293977   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:39.299358   11080 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0904 00:23:39.299596   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:23:39.317231   11080 ssh_runner.go:195] Run: cat /version.json
	I0904 00:23:39.317499   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:23:41.458180   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:23:41.458180   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:41.458180   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:23:41.458180   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:23:41.458180   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:41.459162   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:23:44.105052   11080 main.go:141] libmachine: [stdout =====>] : 172.25.112.78
	
	I0904 00:23:44.105052   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:44.105356   11080 sshutil.go:53] new ssh client: &{IP:172.25.112.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700\id_rsa Username:docker}
	I0904 00:23:44.128684   11080 main.go:141] libmachine: [stdout =====>] : 172.25.112.78
	
	I0904 00:23:44.128684   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:44.129695   11080 sshutil.go:53] new ssh client: &{IP:172.25.112.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700\id_rsa Username:docker}
	I0904 00:23:44.198467   11080 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.8989559s)
	W0904 00:23:44.198598   11080 start.go:868] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0904 00:23:44.231284   11080 ssh_runner.go:235] Completed: cat /version.json: (4.9139556s)
	I0904 00:23:44.246092   11080 ssh_runner.go:195] Run: systemctl --version
	I0904 00:23:44.270631   11080 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0904 00:23:44.281650   11080 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0904 00:23:44.294336   11080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 00:23:44.328601   11080 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0904 00:23:44.328601   11080 start.go:495] detecting cgroup driver to use...
	I0904 00:23:44.329005   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0904 00:23:44.339070   11080 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0904 00:23:44.339168   11080 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0904 00:23:44.385278   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0904 00:23:44.421750   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0904 00:23:44.447137   11080 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0904 00:23:44.462394   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0904 00:23:44.503962   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0904 00:23:44.543202   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0904 00:23:44.577063   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0904 00:23:44.612905   11080 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 00:23:44.661199   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0904 00:23:44.694723   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0904 00:23:44.726341   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0904 00:23:44.756868   11080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 00:23:44.776495   11080 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0904 00:23:44.787538   11080 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0904 00:23:44.820594   11080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 00:23:44.849743   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 00:23:45.075413   11080 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0904 00:23:45.136540   11080 start.go:495] detecting cgroup driver to use...
	I0904 00:23:45.146840   11080 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0904 00:23:45.182265   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 00:23:45.213362   11080 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 00:23:45.256226   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 00:23:45.296714   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0904 00:23:45.333262   11080 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0904 00:23:45.396624   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0904 00:23:45.420897   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 00:23:45.468280   11080 ssh_runner.go:195] Run: which cri-dockerd
	I0904 00:23:45.485286   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0904 00:23:45.506287   11080 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0904 00:23:45.556267   11080 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0904 00:23:45.790523   11080 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0904 00:23:46.010920   11080 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0904 00:23:46.010920   11080 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0904 00:23:46.064519   11080 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0904 00:23:46.099172   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 00:23:46.323229   11080 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0904 00:23:47.147448   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 00:23:47.181995   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0904 00:23:47.223294   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0904 00:23:47.256810   11080 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0904 00:23:47.478989   11080 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0904 00:23:47.719263   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 00:23:47.960095   11080 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0904 00:23:48.025713   11080 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0904 00:23:48.067607   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 00:23:48.317522   11080 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0904 00:23:48.499541   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0904 00:23:48.527119   11080 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0904 00:23:48.540422   11080 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0904 00:23:48.550993   11080 start.go:563] Will wait 60s for crictl version
	I0904 00:23:48.563274   11080 ssh_runner.go:195] Run: which crictl
	I0904 00:23:48.582634   11080 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 00:23:48.651628   11080 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.3.2
	RuntimeApiVersion:  v1
	I0904 00:23:48.661911   11080 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0904 00:23:48.708642   11080 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0904 00:23:48.750835   11080 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.3.2 ...
	I0904 00:23:48.750835   11080 ip.go:180] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0904 00:23:48.754823   11080 ip.go:194] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0904 00:23:48.754823   11080 ip.go:194] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0904 00:23:48.754823   11080 ip.go:189] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0904 00:23:48.754823   11080 ip.go:215] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:71:2e:33 Flags:up|broadcast|multicast|running}
	I0904 00:23:48.757821   11080 ip.go:218] interface addr: fe80::b536:5e95:cebf:bd87/64
	I0904 00:23:48.757821   11080 ip.go:218] interface addr: 172.25.112.1/20
	I0904 00:23:48.769848   11080 ssh_runner.go:195] Run: grep 172.25.112.1	host.minikube.internal$ /etc/hosts
	I0904 00:23:48.777171   11080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.112.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 00:23:48.801144   11080 kubeadm.go:875] updating cluster {Name:multinode-477700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.34.0 ClusterName:multinode-477700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.112.78 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.125.181 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.25.125.123 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:f
alse ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0904 00:23:48.801684   11080 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0904 00:23:48.811116   11080 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0904 00:23:48.838421   11080 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	kindest/kindnetd:v20250512-df8de77b
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0904 00:23:48.838421   11080 docker.go:621] Images already preloaded, skipping extraction
	I0904 00:23:48.847580   11080 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0904 00:23:48.872099   11080 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	kindest/kindnetd:v20250512-df8de77b
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0904 00:23:48.872161   11080 cache_images.go:85] Images are preloaded, skipping loading
	I0904 00:23:48.872161   11080 kubeadm.go:926] updating node { 172.25.112.78 8443 v1.34.0 docker true true} ...
	I0904 00:23:48.872666   11080 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-477700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.112.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:multinode-477700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0904 00:23:48.883194   11080 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0904 00:23:48.948011   11080 cni.go:84] Creating CNI manager for ""
	I0904 00:23:48.948011   11080 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0904 00:23:48.948011   11080 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 00:23:48.948011   11080 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.112.78 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-477700 NodeName:multinode-477700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.112.78"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.112.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 00:23:48.949014   11080 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.112.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-477700"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.25.112.78"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.112.78"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0904 00:23:48.961035   11080 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0904 00:23:48.983839   11080 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 00:23:48.999536   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0904 00:23:49.020511   11080 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0904 00:23:49.070991   11080 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 00:23:49.102645   11080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I0904 00:23:49.149363   11080 ssh_runner.go:195] Run: grep 172.25.112.78	control-plane.minikube.internal$ /etc/hosts
	I0904 00:23:49.156943   11080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.112.78	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 00:23:49.206124   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 00:23:49.440490   11080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 00:23:49.493522   11080 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700 for IP: 172.25.112.78
	I0904 00:23:49.493522   11080 certs.go:194] generating shared ca certs ...
	I0904 00:23:49.493522   11080 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 00:23:49.493522   11080 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0904 00:23:49.494502   11080 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0904 00:23:49.494502   11080 certs.go:256] generating profile certs ...
	I0904 00:23:49.495506   11080 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\client.key
	I0904 00:23:49.495506   11080 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.key.dbd75d0e
	I0904 00:23:49.495506   11080 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.crt.dbd75d0e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.112.78]
	I0904 00:23:49.960723   11080 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.crt.dbd75d0e ...
	I0904 00:23:49.960723   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.crt.dbd75d0e: {Name:mkc4978833b15a71f00486612ea48025cdf11766 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 00:23:49.962716   11080 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.key.dbd75d0e ...
	I0904 00:23:49.962716   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.key.dbd75d0e: {Name:mkb61b9562aa8169185c7c3992ef11de4fa7a300 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 00:23:49.963461   11080 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.crt.dbd75d0e -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.crt
	I0904 00:23:49.979350   11080 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.key.dbd75d0e -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.key
	I0904 00:23:49.980836   11080 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\proxy-client.key
	I0904 00:23:49.980836   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0904 00:23:49.981127   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0904 00:23:49.981347   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0904 00:23:49.981347   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0904 00:23:49.982444   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0904 00:23:49.982638   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0904 00:23:49.982837   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0904 00:23:49.983035   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0904 00:23:49.983236   11080 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220.pem (1338 bytes)
	W0904 00:23:49.983956   11080 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220_empty.pem, impossibly tiny 0 bytes
	I0904 00:23:49.983956   11080 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0904 00:23:49.983956   11080 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0904 00:23:49.984738   11080 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0904 00:23:49.985306   11080 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0904 00:23:49.985639   11080 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem (1708 bytes)
	I0904 00:23:49.985639   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220.pem -> /usr/share/ca-certificates/2220.pem
	I0904 00:23:49.986286   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem -> /usr/share/ca-certificates/22202.pem
	I0904 00:23:49.986669   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0904 00:23:49.987978   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 00:23:50.053485   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0904 00:23:50.110699   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 00:23:50.162718   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0904 00:23:50.213955   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0904 00:23:50.262310   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0904 00:23:50.311608   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 00:23:50.359763   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0904 00:23:50.408738   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220.pem --> /usr/share/ca-certificates/2220.pem (1338 bytes)
	I0904 00:23:50.459603   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem --> /usr/share/ca-certificates/22202.pem (1708 bytes)
	I0904 00:23:50.500939   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 00:23:50.545159   11080 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0904 00:23:50.589999   11080 ssh_runner.go:195] Run: openssl version
	I0904 00:23:50.615208   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 00:23:50.648532   11080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 00:23:50.659253   11080 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  3 22:20 /usr/share/ca-certificates/minikubeCA.pem
	I0904 00:23:50.670810   11080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 00:23:50.695008   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0904 00:23:50.728954   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2220.pem && ln -fs /usr/share/ca-certificates/2220.pem /etc/ssl/certs/2220.pem"
	I0904 00:23:50.771184   11080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2220.pem
	I0904 00:23:50.779825   11080 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  3 22:37 /usr/share/ca-certificates/2220.pem
	I0904 00:23:50.789888   11080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2220.pem
	I0904 00:23:50.812361   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2220.pem /etc/ssl/certs/51391683.0"
	I0904 00:23:50.846158   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22202.pem && ln -fs /usr/share/ca-certificates/22202.pem /etc/ssl/certs/22202.pem"
	I0904 00:23:50.877425   11080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22202.pem
	I0904 00:23:50.884365   11080 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  3 22:37 /usr/share/ca-certificates/22202.pem
	I0904 00:23:50.896334   11080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22202.pem
	I0904 00:23:50.915820   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22202.pem /etc/ssl/certs/3ec20f2e.0"
	I0904 00:23:50.947513   11080 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 00:23:50.967845   11080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0904 00:23:50.989518   11080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0904 00:23:51.012345   11080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0904 00:23:51.037223   11080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0904 00:23:51.059945   11080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0904 00:23:51.086041   11080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0904 00:23:51.098169   11080 kubeadm.go:392] StartCluster: {Name:multinode-477700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:multinode-477700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.112.78 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.125.181 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.25.125.123 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 00:23:51.107832   11080 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0904 00:23:51.147805   11080 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0904 00:23:51.174384   11080 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0904 00:23:51.174519   11080 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0904 00:23:51.187760   11080 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0904 00:23:51.213778   11080 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0904 00:23:51.215223   11080 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-477700" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0904 00:23:51.215745   11080 kubeconfig.go:62] C:\Users\jenkins.minikube6\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-477700" cluster setting kubeconfig missing "multinode-477700" context setting]
	I0904 00:23:51.216507   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 00:23:51.234761   11080 kapi.go:59] client config for multinode-477700: &rest.Config{Host:"https://172.25.112.78:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-477700/client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-477700/client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24e0580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0904 00:23:51.236785   11080 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0904 00:23:51.236785   11080 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0904 00:23:51.236785   11080 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0904 00:23:51.236785   11080 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0904 00:23:51.236785   11080 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0904 00:23:51.236785   11080 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0904 00:23:51.248760   11080 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0904 00:23:51.267760   11080 kubeadm.go:636] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.25.126.63
	+  advertiseAddress: 172.25.112.78
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -15,13 +15,13 @@
	   name: "multinode-477700"
	   kubeletExtraArgs:
	     - name: "node-ip"
	-      value: "172.25.126.63"
	+      value: "172.25.112.78"
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.25.126.63"]
	+  certSANs: ["127.0.0.1", "localhost", "172.25.112.78"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	       value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	
	-- /stdout --
	I0904 00:23:51.267760   11080 kubeadm.go:1152] stopping kube-system containers ...
	I0904 00:23:51.275748   11080 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0904 00:23:51.305359   11080 docker.go:484] Stopping containers: [89b7640b7697 cd3b66b73cb4 7ec79c04c516 882d6e338723 3dd1de246060 a5c4aad9ef6f 71185e7e5e3a 4c1d437a10c4 0545be46c0c9 944ecb490268 2b011dd581a4 774d3869c70e 8b34bc6a82c9 e2706c7084c7 9b5837c04c52 be2ad3b809d0]
	I0904 00:23:51.314731   11080 ssh_runner.go:195] Run: docker stop 89b7640b7697 cd3b66b73cb4 7ec79c04c516 882d6e338723 3dd1de246060 a5c4aad9ef6f 71185e7e5e3a 4c1d437a10c4 0545be46c0c9 944ecb490268 2b011dd581a4 774d3869c70e 8b34bc6a82c9 e2706c7084c7 9b5837c04c52 be2ad3b809d0
	I0904 00:23:51.368330   11080 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0904 00:23:51.411099   11080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0904 00:23:51.432015   11080 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0904 00:23:51.432015   11080 kubeadm.go:157] found existing configuration files:
	
	I0904 00:23:51.443005   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0904 00:23:51.463008   11080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0904 00:23:51.474014   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0904 00:23:51.505120   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0904 00:23:51.523000   11080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0904 00:23:51.534862   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0904 00:23:51.565590   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0904 00:23:51.583073   11080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0904 00:23:51.595160   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0904 00:23:51.625086   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0904 00:23:51.641069   11080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0904 00:23:51.653073   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0904 00:23:51.686003   11080 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0904 00:23:51.704982   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0904 00:23:52.021787   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0904 00:23:54.223104   11080 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.2012122s)
	I0904 00:23:54.223196   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0904 00:23:54.578445   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0904 00:23:54.660171   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0904 00:23:54.781389   11080 api_server.go:52] waiting for apiserver process to appear ...
	I0904 00:23:54.795755   11080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 00:23:55.295777   11080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 00:23:55.795594   11080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 00:23:56.296041   11080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 00:23:56.794673   11080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 00:23:56.827636   11080 api_server.go:72] duration metric: took 2.0461631s to wait for apiserver process to appear ...
	I0904 00:23:56.827636   11080 api_server.go:88] waiting for apiserver healthz status ...
	I0904 00:23:56.827636   11080 api_server.go:253] Checking apiserver healthz at https://172.25.112.78:8443/healthz ...
	I0904 00:24:01.049057   11080 api_server.go:279] https://172.25.112.78:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0904 00:24:01.049057   11080 api_server.go:103] status: https://172.25.112.78:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0904 00:24:01.050051   11080 api_server.go:253] Checking apiserver healthz at https://172.25.112.78:8443/healthz ...
	I0904 00:24:01.247152   11080 api_server.go:279] https://172.25.112.78:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 00:24:01.247474   11080 api_server.go:103] status: https://172.25.112.78:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 00:24:01.328774   11080 api_server.go:253] Checking apiserver healthz at https://172.25.112.78:8443/healthz ...
	I0904 00:24:01.344034   11080 api_server.go:279] https://172.25.112.78:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 00:24:01.344118   11080 api_server.go:103] status: https://172.25.112.78:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 00:24:01.828435   11080 api_server.go:253] Checking apiserver healthz at https://172.25.112.78:8443/healthz ...
	I0904 00:24:01.837035   11080 api_server.go:279] https://172.25.112.78:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 00:24:01.837035   11080 api_server.go:103] status: https://172.25.112.78:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 00:24:02.328357   11080 api_server.go:253] Checking apiserver healthz at https://172.25.112.78:8443/healthz ...
	I0904 00:24:02.338095   11080 api_server.go:279] https://172.25.112.78:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 00:24:02.338095   11080 api_server.go:103] status: https://172.25.112.78:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 00:24:02.828160   11080 api_server.go:253] Checking apiserver healthz at https://172.25.112.78:8443/healthz ...
	I0904 00:24:02.836449   11080 api_server.go:279] https://172.25.112.78:8443/healthz returned 200:
	ok
	I0904 00:24:02.856741   11080 api_server.go:141] control plane version: v1.34.0
	I0904 00:24:02.856741   11080 api_server.go:131] duration metric: took 6.0290238s to wait for apiserver health ...
	I0904 00:24:02.856741   11080 cni.go:84] Creating CNI manager for ""
	I0904 00:24:02.856813   11080 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0904 00:24:02.861636   11080 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0904 00:24:02.880273   11080 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0904 00:24:02.896192   11080 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0904 00:24:02.896192   11080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0904 00:24:03.036013   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0904 00:24:04.760395   11080 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.7243588s)
	I0904 00:24:04.760506   11080 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 00:24:04.793520   11080 system_pods.go:59] 12 kube-system pods found
	I0904 00:24:04.793520   11080 system_pods.go:61] "coredns-66bc5c9577-mg9nc" [39d4fb7b-1473-4a4e-9fb1-ce058a1c4904] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 00:24:04.793520   11080 system_pods.go:61] "etcd-multinode-477700" [f619f984-ced6-403e-bd87-70c0ad7b008d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 00:24:04.793520   11080 system_pods.go:61] "kindnet-gdpss" [2af7872d-5ba2-4df0-89ef-eb2c46ddd319] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0904 00:24:04.793520   11080 system_pods.go:61] "kindnet-gj9bp" [d46acd35-8083-498f-805b-ca4a3cf9ee14] Running
	I0904 00:24:04.793520   11080 system_pods.go:61] "kindnet-ljv6w" [70d4500f-98bf-4e06-a7e6-b7e219dcb428] Running
	I0904 00:24:04.793520   11080 system_pods.go:61] "kube-apiserver-multinode-477700" [dbbb2637-2c76-4436-aeaf-47e07cf0b8cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 00:24:04.793520   11080 system_pods.go:61] "kube-controller-manager-multinode-477700" [4171909c-4c75-4c40-9e8f-89b31bfd0f3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0904 00:24:04.793520   11080 system_pods.go:61] "kube-proxy-lnh8p" [16cf2fb9-db73-4972-a48b-e5492d3bd79f] Running
	I0904 00:24:04.793520   11080 system_pods.go:61] "kube-proxy-rbxm9" [cf3a297f-0ef0-418b-ba87-3f2966bba73e] Running
	I0904 00:24:04.794138   11080 system_pods.go:61] "kube-proxy-v9bfx" [2e72957a-51b3-4f18-876a-32d17f1fcb01] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0904 00:24:04.794138   11080 system_pods.go:61] "kube-scheduler-multinode-477700" [9600bbee-3d89-49b5-9e4a-2b6eb499de52] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0904 00:24:04.794138   11080 system_pods.go:61] "storage-provisioner" [6ff776d2-685f-4111-bbe0-2d7f616fed2a] Running
	I0904 00:24:04.794138   11080 system_pods.go:74] duration metric: took 33.6314ms to wait for pod list to return data ...
	I0904 00:24:04.794225   11080 node_conditions.go:102] verifying NodePressure condition ...
	I0904 00:24:04.811537   11080 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0904 00:24:04.811639   11080 node_conditions.go:123] node cpu capacity is 2
	I0904 00:24:04.811708   11080 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0904 00:24:04.811708   11080 node_conditions.go:123] node cpu capacity is 2
	I0904 00:24:04.811784   11080 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0904 00:24:04.811784   11080 node_conditions.go:123] node cpu capacity is 2
	I0904 00:24:04.811850   11080 node_conditions.go:105] duration metric: took 17.6249ms to run NodePressure ...
	I0904 00:24:04.811935   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0904 00:24:05.447591   11080 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0904 00:24:05.453888   11080 kubeadm.go:735] kubelet initialised
	I0904 00:24:05.453995   11080 kubeadm.go:736] duration metric: took 6.4043ms waiting for restarted kubelet to initialise ...
	I0904 00:24:05.453995   11080 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0904 00:24:05.484523   11080 ops.go:34] apiserver oom_adj: -16
	I0904 00:24:05.484523   11080 kubeadm.go:593] duration metric: took 14.3098102s to restartPrimaryControlPlane
	I0904 00:24:05.484523   11080 kubeadm.go:394] duration metric: took 14.386159s to StartCluster
	I0904 00:24:05.484523   11080 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 00:24:05.484523   11080 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0904 00:24:05.487385   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 00:24:05.488591   11080 start.go:235] Will wait 6m0s for node &{Name: IP:172.25.112.78 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 00:24:05.488591   11080 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0904 00:24:05.489154   11080 config.go:182] Loaded profile config "multinode-477700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0904 00:24:05.493519   11080 out.go:179] * Verifying Kubernetes components...
	I0904 00:24:05.497772   11080 out.go:179] * Enabled addons: 
	I0904 00:24:05.502489   11080 addons.go:514] duration metric: took 13.8972ms for enable addons: enabled=[]
	I0904 00:24:05.512987   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 00:24:05.899735   11080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 00:24:05.933828   11080 node_ready.go:35] waiting up to 6m0s for node "multinode-477700" to be "Ready" ...
	W0904 00:24:07.941319   11080 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	W0904 00:24:10.442659   11080 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	W0904 00:24:12.939126   11080 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	W0904 00:24:14.940399   11080 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	W0904 00:24:17.441390   11080 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	W0904 00:24:19.940942   11080 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	W0904 00:24:22.440638   11080 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	W0904 00:24:24.943153   11080 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	W0904 00:24:27.439415   11080 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	W0904 00:24:29.440361   11080 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	W0904 00:24:31.940178   11080 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	W0904 00:24:34.438539   11080 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	W0904 00:24:36.947640   11080 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	W0904 00:24:39.440875   11080 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	W0904 00:24:41.939386   11080 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	W0904 00:24:44.440068   11080 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	W0904 00:24:46.443378   11080 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	I0904 00:24:48.939306   11080 node_ready.go:49] node "multinode-477700" is "Ready"
	I0904 00:24:48.939475   11080 node_ready.go:38] duration metric: took 43.0049851s for node "multinode-477700" to be "Ready" ...
	I0904 00:24:48.939475   11080 api_server.go:52] waiting for apiserver process to appear ...
	I0904 00:24:48.951580   11080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 00:24:48.992051   11080 api_server.go:72] duration metric: took 43.5028718s to wait for apiserver process to appear ...
	I0904 00:24:48.992051   11080 api_server.go:88] waiting for apiserver healthz status ...
	I0904 00:24:48.992051   11080 api_server.go:253] Checking apiserver healthz at https://172.25.112.78:8443/healthz ...
	I0904 00:24:49.001309   11080 api_server.go:279] https://172.25.112.78:8443/healthz returned 200:
	ok
	I0904 00:24:49.002842   11080 api_server.go:141] control plane version: v1.34.0
	I0904 00:24:49.002943   11080 api_server.go:131] duration metric: took 10.8921ms to wait for apiserver health ...
	I0904 00:24:49.002943   11080 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 00:24:49.009997   11080 system_pods.go:59] 12 kube-system pods found
	I0904 00:24:49.010043   11080 system_pods.go:61] "coredns-66bc5c9577-mg9nc" [39d4fb7b-1473-4a4e-9fb1-ce058a1c4904] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 00:24:49.010043   11080 system_pods.go:61] "etcd-multinode-477700" [f619f984-ced6-403e-bd87-70c0ad7b008d] Running
	I0904 00:24:49.010043   11080 system_pods.go:61] "kindnet-gdpss" [2af7872d-5ba2-4df0-89ef-eb2c46ddd319] Running
	I0904 00:24:49.010043   11080 system_pods.go:61] "kindnet-gj9bp" [d46acd35-8083-498f-805b-ca4a3cf9ee14] Running
	I0904 00:24:49.010043   11080 system_pods.go:61] "kindnet-ljv6w" [70d4500f-98bf-4e06-a7e6-b7e219dcb428] Running
	I0904 00:24:49.010043   11080 system_pods.go:61] "kube-apiserver-multinode-477700" [dbbb2637-2c76-4436-aeaf-47e07cf0b8cb] Running
	I0904 00:24:49.010043   11080 system_pods.go:61] "kube-controller-manager-multinode-477700" [4171909c-4c75-4c40-9e8f-89b31bfd0f3a] Running
	I0904 00:24:49.010043   11080 system_pods.go:61] "kube-proxy-lnh8p" [16cf2fb9-db73-4972-a48b-e5492d3bd79f] Running
	I0904 00:24:49.010043   11080 system_pods.go:61] "kube-proxy-rbxm9" [cf3a297f-0ef0-418b-ba87-3f2966bba73e] Running
	I0904 00:24:49.010043   11080 system_pods.go:61] "kube-proxy-v9bfx" [2e72957a-51b3-4f18-876a-32d17f1fcb01] Running
	I0904 00:24:49.010043   11080 system_pods.go:61] "kube-scheduler-multinode-477700" [9600bbee-3d89-49b5-9e4a-2b6eb499de52] Running
	I0904 00:24:49.010043   11080 system_pods.go:61] "storage-provisioner" [6ff776d2-685f-4111-bbe0-2d7f616fed2a] Running
	I0904 00:24:49.010043   11080 system_pods.go:74] duration metric: took 7.0998ms to wait for pod list to return data ...
	I0904 00:24:49.010043   11080 default_sa.go:34] waiting for default service account to be created ...
	I0904 00:24:49.014161   11080 default_sa.go:45] found service account: "default"
	I0904 00:24:49.014161   11080 default_sa.go:55] duration metric: took 4.1186ms for default service account to be created ...
	I0904 00:24:49.014161   11080 system_pods.go:116] waiting for k8s-apps to be running ...
	I0904 00:24:49.017644   11080 system_pods.go:86] 12 kube-system pods found
	I0904 00:24:49.017644   11080 system_pods.go:89] "coredns-66bc5c9577-mg9nc" [39d4fb7b-1473-4a4e-9fb1-ce058a1c4904] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 00:24:49.017644   11080 system_pods.go:89] "etcd-multinode-477700" [f619f984-ced6-403e-bd87-70c0ad7b008d] Running
	I0904 00:24:49.017644   11080 system_pods.go:89] "kindnet-gdpss" [2af7872d-5ba2-4df0-89ef-eb2c46ddd319] Running
	I0904 00:24:49.017644   11080 system_pods.go:89] "kindnet-gj9bp" [d46acd35-8083-498f-805b-ca4a3cf9ee14] Running
	I0904 00:24:49.017644   11080 system_pods.go:89] "kindnet-ljv6w" [70d4500f-98bf-4e06-a7e6-b7e219dcb428] Running
	I0904 00:24:49.017644   11080 system_pods.go:89] "kube-apiserver-multinode-477700" [dbbb2637-2c76-4436-aeaf-47e07cf0b8cb] Running
	I0904 00:24:49.017644   11080 system_pods.go:89] "kube-controller-manager-multinode-477700" [4171909c-4c75-4c40-9e8f-89b31bfd0f3a] Running
	I0904 00:24:49.017644   11080 system_pods.go:89] "kube-proxy-lnh8p" [16cf2fb9-db73-4972-a48b-e5492d3bd79f] Running
	I0904 00:24:49.017644   11080 system_pods.go:89] "kube-proxy-rbxm9" [cf3a297f-0ef0-418b-ba87-3f2966bba73e] Running
	I0904 00:24:49.017644   11080 system_pods.go:89] "kube-proxy-v9bfx" [2e72957a-51b3-4f18-876a-32d17f1fcb01] Running
	I0904 00:24:49.017644   11080 system_pods.go:89] "kube-scheduler-multinode-477700" [9600bbee-3d89-49b5-9e4a-2b6eb499de52] Running
	I0904 00:24:49.017644   11080 system_pods.go:89] "storage-provisioner" [6ff776d2-685f-4111-bbe0-2d7f616fed2a] Running
	I0904 00:24:49.017644   11080 system_pods.go:126] duration metric: took 3.4825ms to wait for k8s-apps to be running ...
	I0904 00:24:49.017644   11080 system_svc.go:44] waiting for kubelet service to be running ....
	I0904 00:24:49.032027   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 00:24:49.060447   11080 system_svc.go:56] duration metric: took 42.803ms WaitForService to wait for kubelet
	I0904 00:24:49.060509   11080 kubeadm.go:578] duration metric: took 43.5713297s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 00:24:49.060574   11080 node_conditions.go:102] verifying NodePressure condition ...
	I0904 00:24:49.064990   11080 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0904 00:24:49.065042   11080 node_conditions.go:123] node cpu capacity is 2
	I0904 00:24:49.065112   11080 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0904 00:24:49.065112   11080 node_conditions.go:123] node cpu capacity is 2
	I0904 00:24:49.065112   11080 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0904 00:24:49.065183   11080 node_conditions.go:123] node cpu capacity is 2
	I0904 00:24:49.065183   11080 node_conditions.go:105] duration metric: took 4.6085ms to run NodePressure ...
	I0904 00:24:49.065183   11080 start.go:241] waiting for startup goroutines ...
	I0904 00:24:49.065278   11080 start.go:246] waiting for cluster config update ...
	I0904 00:24:49.065337   11080 start.go:255] writing updated cluster config ...
	I0904 00:24:49.069773   11080 out.go:203] 
	I0904 00:24:49.073016   11080 config.go:182] Loaded profile config "ha-270000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0904 00:24:49.084033   11080 config.go:182] Loaded profile config "multinode-477700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0904 00:24:49.084033   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\config.json ...
	I0904 00:24:49.089976   11080 out.go:179] * Starting "multinode-477700-m02" worker node in "multinode-477700" cluster
	I0904 00:24:49.095471   11080 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0904 00:24:49.095471   11080 cache.go:58] Caching tarball of preloaded images
	I0904 00:24:49.095748   11080 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0904 00:24:49.095748   11080 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0904 00:24:49.095748   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\config.json ...
	I0904 00:24:49.098937   11080 start.go:360] acquireMachinesLock for multinode-477700-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 00:24:49.099161   11080 start.go:364] duration metric: took 108.5µs to acquireMachinesLock for "multinode-477700-m02"
	I0904 00:24:49.099285   11080 start.go:96] Skipping create...Using existing machine configuration
	I0904 00:24:49.099285   11080 fix.go:54] fixHost starting: m02
	I0904 00:24:49.100005   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:24:51.131472   11080 main.go:141] libmachine: [stdout =====>] : Off
	
	I0904 00:24:51.132494   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:24:51.132523   11080 fix.go:112] recreateIfNeeded on multinode-477700-m02: state=Stopped err=<nil>
	W0904 00:24:51.132562   11080 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 00:24:51.137809   11080 out.go:252] * Restarting existing hyperv VM for "multinode-477700-m02" ...
	I0904 00:24:51.138053   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-477700-m02
	I0904 00:24:54.170456   11080 main.go:141] libmachine: [stdout =====>] : 
	I0904 00:24:54.170456   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:24:54.170456   11080 main.go:141] libmachine: Waiting for host to start...
	I0904 00:24:54.170456   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:24:56.371836   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:24:56.371836   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:24:56.371836   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:24:58.815221   11080 main.go:141] libmachine: [stdout =====>] : 
	I0904 00:24:58.815577   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:24:59.816458   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:25:01.987275   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:25:01.987275   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:01.987275   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:25:04.465033   11080 main.go:141] libmachine: [stdout =====>] : 
	I0904 00:25:04.465033   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:05.466313   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:25:07.615293   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:25:07.616310   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:07.616310   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:25:10.093217   11080 main.go:141] libmachine: [stdout =====>] : 
	I0904 00:25:10.093283   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:11.093756   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:25:13.234446   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:25:13.234446   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:13.234446   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:25:15.726366   11080 main.go:141] libmachine: [stdout =====>] : 
	I0904 00:25:15.726796   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:16.727967   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:25:18.863999   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:25:18.863999   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:18.864477   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:25:21.419110   11080 main.go:141] libmachine: [stdout =====>] : 172.25.123.14
	
	I0904 00:25:21.419110   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:21.422199   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:25:23.594740   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:25:23.594926   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:23.595008   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:25:26.168522   11080 main.go:141] libmachine: [stdout =====>] : 172.25.123.14
	
	I0904 00:25:26.168522   11080 main.go:141] libmachine: [stderr =====>] : 
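The loop above queries `( Hyper-V\Get-VM … ).networkadapters[0]).ipaddresses[0]` repeatedly, sleeping between attempts, until the VM reports a non-empty IP. The retry-until-nonempty pattern can be sketched generically in shell; the function names and the fake probe below are hypothetical stand-ins for the PowerShell query, not minikube code:

```shell
# Generic retry-until-nonempty poller, mirroring the IP-polling loop
# in the log above (retry_nonempty and fake_probe are hypothetical).
retry_nonempty() {
    local tries=$1; shift
    local out i
    for i in $(seq 1 "$tries"); do
        out=$("$@" || true)
        if [ -n "$out" ]; then
            echo "$out"
            return 0
        fi
        sleep 0.1   # minikube waits ~1s between Hyper-V queries
    done
    return 1
}

# Fake probe: reports nothing for the first two calls, then an IP,
# imitating a VM whose adapter has not yet acquired an address.
COUNT_FILE=$(mktemp)
echo 0 > "$COUNT_FILE"
fake_probe() {
    local n
    n=$(( $(cat "$COUNT_FILE") + 1 ))
    echo "$n" > "$COUNT_FILE"
    if [ "$n" -ge 3 ]; then
        echo "172.25.123.14"
    fi
}

IP=$(retry_nonempty 5 fake_probe)
```

In the real log each empty `[stdout =====>]` line corresponds to one probe that returned nothing, followed by a one-second back-off before the next `Get-VM` invocation.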
	I0904 00:25:26.169038   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\config.json ...
	I0904 00:25:26.171980   11080 machine.go:93] provisionDockerMachine start ...
	I0904 00:25:26.171980   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:25:28.290799   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:25:28.290832   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:28.290832   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:25:30.968338   11080 main.go:141] libmachine: [stdout =====>] : 172.25.123.14
	
	I0904 00:25:30.968539   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:30.974964   11080 main.go:141] libmachine: Using SSH client type: native
	I0904 00:25:30.975788   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.123.14 22 <nil> <nil>}
	I0904 00:25:30.975788   11080 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 00:25:31.104710   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0904 00:25:31.104768   11080 buildroot.go:166] provisioning hostname "multinode-477700-m02"
	I0904 00:25:31.104837   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:25:33.247116   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:25:33.247116   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:33.247861   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:25:35.795449   11080 main.go:141] libmachine: [stdout =====>] : 172.25.123.14
	
	I0904 00:25:35.795449   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:35.801989   11080 main.go:141] libmachine: Using SSH client type: native
	I0904 00:25:35.802316   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.123.14 22 <nil> <nil>}
	I0904 00:25:35.802316   11080 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-477700-m02 && echo "multinode-477700-m02" | sudo tee /etc/hostname
	I0904 00:25:35.950213   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-477700-m02
	
	I0904 00:25:35.950385   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:25:38.097286   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:25:38.097286   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:38.097286   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:25:40.571167   11080 main.go:141] libmachine: [stdout =====>] : 172.25.123.14
	
	I0904 00:25:40.571167   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:40.577061   11080 main.go:141] libmachine: Using SSH client type: native
	I0904 00:25:40.577612   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.123.14 22 <nil> <nil>}
	I0904 00:25:40.577612   11080 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-477700-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-477700-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-477700-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 00:25:40.717033   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
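The SSH command above rewrites the `127.0.1.1` entry in the guest's `/etc/hosts` so it resolves the node name. The same logic can be sketched against a scratch file instead of `/etc/hosts` (the temp path and seed contents are hypothetical; no sudo needed):

```shell
# Sketch of the /etc/hosts hostname-fixup script run by minikube above,
# applied to a scratch copy rather than the real /etc/hosts.
HOSTS=$(mktemp)
NODE=multinode-477700-m02
printf '127.0.0.1 localhost\n127.0.1.1 minikube\n' > "$HOSTS"

if ! grep -q "\s${NODE}$" "$HOSTS"; then
    if grep -q '^127\.0\.1\.1\s' "$HOSTS"; then
        # an old 127.0.1.1 entry exists: replace it with the node name
        sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 ${NODE}/" "$HOSTS"
    else
        # no 127.0.1.1 entry yet: append a fresh one
        echo "127.0.1.1 ${NODE}" >> "$HOSTS"
    fi
fi
```

Because the guard `grep` runs first, re-running the script is a no-op once the entry is present, which is why the log shows an empty command output here.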
	I0904 00:25:40.717033   11080 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0904 00:25:40.717033   11080 buildroot.go:174] setting up certificates
	I0904 00:25:40.717033   11080 provision.go:84] configureAuth start
	I0904 00:25:40.717033   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:25:42.768208   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:25:42.768208   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:42.769064   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:25:45.241408   11080 main.go:141] libmachine: [stdout =====>] : 172.25.123.14
	
	I0904 00:25:45.241408   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:45.241408   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:25:47.325193   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:25:47.325517   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:47.325517   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:25:49.886176   11080 main.go:141] libmachine: [stdout =====>] : 172.25.123.14
	
	I0904 00:25:49.886176   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:49.886176   11080 provision.go:143] copyHostCerts
	I0904 00:25:49.886913   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0904 00:25:49.887060   11080 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0904 00:25:49.887060   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0904 00:25:49.887716   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0904 00:25:49.889108   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0904 00:25:49.889421   11080 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0904 00:25:49.889463   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0904 00:25:49.889687   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0904 00:25:49.890404   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0904 00:25:49.891384   11080 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0904 00:25:49.891384   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0904 00:25:49.891688   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0904 00:25:49.892489   11080 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-477700-m02 san=[127.0.0.1 172.25.123.14 localhost minikube multinode-477700-m02]
	I0904 00:25:50.114539   11080 provision.go:177] copyRemoteCerts
	I0904 00:25:50.126641   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 00:25:50.126641   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:25:52.190332   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:25:52.191309   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:52.191407   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:25:54.684710   11080 main.go:141] libmachine: [stdout =====>] : 172.25.123.14
	
	I0904 00:25:54.685127   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:54.686074   11080 sshutil.go:53] new ssh client: &{IP:172.25.123.14 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700-m02\id_rsa Username:docker}
	I0904 00:25:54.802062   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6753586s)
	I0904 00:25:54.802154   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0904 00:25:54.802495   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0904 00:25:54.855915   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0904 00:25:54.856067   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0904 00:25:54.909487   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0904 00:25:54.909646   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0904 00:25:54.959257   11080 provision.go:87] duration metric: took 14.2420332s to configureAuth
	I0904 00:25:54.959340   11080 buildroot.go:189] setting minikube options for container-runtime
	I0904 00:25:54.960101   11080 config.go:182] Loaded profile config "multinode-477700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0904 00:25:54.960175   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:25:57.019585   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:25:57.020613   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:57.020699   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:25:59.525142   11080 main.go:141] libmachine: [stdout =====>] : 172.25.123.14
	
	I0904 00:25:59.525224   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:59.531789   11080 main.go:141] libmachine: Using SSH client type: native
	I0904 00:25:59.532528   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.123.14 22 <nil> <nil>}
	I0904 00:25:59.532528   11080 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0904 00:25:59.666075   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0904 00:25:59.666075   11080 buildroot.go:70] root file system type: tmpfs
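The `tmpfs` answer above comes from a one-liner probe of the root filesystem. A minimal sketch of the same check, runnable on any Linux host with GNU coreutils (the ISO-specific handling of the result is minikube's, not shown here):

```shell
# Probe the filesystem type of /, as the provisioner does over SSH.
# On the Buildroot live image this prints "tmpfs", signalling that the
# root is volatile and persistent state lives elsewhere.
fstype=$(df --output=fstype / | tail -n 1)
echo "root fs: ${fstype}"
```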
	I0904 00:25:59.666075   11080 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0904 00:25:59.666075   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:26:01.721342   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:26:01.721342   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:01.721342   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:26:04.210280   11080 main.go:141] libmachine: [stdout =====>] : 172.25.123.14
	
	I0904 00:26:04.211340   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:04.218110   11080 main.go:141] libmachine: Using SSH client type: native
	I0904 00:26:04.218839   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.123.14 22 <nil> <nil>}
	I0904 00:26:04.218839   11080 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=172.25.112.78"
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0904 00:26:04.381441   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=172.25.112.78
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0904 00:26:04.381559   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:26:06.470781   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:26:06.470781   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:06.470781   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:26:08.977868   11080 main.go:141] libmachine: [stdout =====>] : 172.25.123.14
	
	I0904 00:26:08.977868   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:08.983945   11080 main.go:141] libmachine: Using SSH client type: native
	I0904 00:26:08.984614   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.123.14 22 <nil> <nil>}
	I0904 00:26:08.984614   11080 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0904 00:26:10.533120   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
	
	I0904 00:26:10.533120   11080 machine.go:96] duration metric: took 44.3605443s to provisionDockerMachine
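The `diff ... || { mv ...; systemctl ... }` command above is an idempotent write: the unit is only swapped in (and docker restarted) when the rendered content differs, and a missing target file counts as a difference, which is why the first run logs `diff: can't stat`. A rough sketch of the pattern using a scratch directory instead of `/lib/systemd/system` (paths here are illustrative):

```shell
# Write the candidate unit next to the live one, then replace only on change.
dir=$(mktemp -d)
printf 'ExecStart=/usr/bin/dockerd\n' > "$dir/docker.service.new"
# diff exits non-zero both on content changes and when the old file is
# missing, so the fallback branch covers first-time installs too.
if ! diff -u "$dir/docker.service" "$dir/docker.service.new" 2>/dev/null; then
  mv "$dir/docker.service.new" "$dir/docker.service"
  echo "unit updated (real run would: systemctl daemon-reload && systemctl restart docker)"
fi
```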
	I0904 00:26:10.533120   11080 start.go:293] postStartSetup for "multinode-477700-m02" (driver="hyperv")
	I0904 00:26:10.533120   11080 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 00:26:10.545641   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 00:26:10.545641   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:26:12.613161   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:26:12.613860   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:12.613907   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:26:15.147654   11080 main.go:141] libmachine: [stdout =====>] : 172.25.123.14
	
	I0904 00:26:15.147826   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:15.148276   11080 sshutil.go:53] new ssh client: &{IP:172.25.123.14 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700-m02\id_rsa Username:docker}
	I0904 00:26:15.258020   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.712221s)
	I0904 00:26:15.271721   11080 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 00:26:15.280557   11080 info.go:137] Remote host: Buildroot 2025.02
	I0904 00:26:15.280557   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0904 00:26:15.280557   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0904 00:26:15.282277   11080 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem -> 22202.pem in /etc/ssl/certs
	I0904 00:26:15.282277   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem -> /etc/ssl/certs/22202.pem
	I0904 00:26:15.295706   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0904 00:26:15.316974   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem --> /etc/ssl/certs/22202.pem (1708 bytes)
	I0904 00:26:15.371091   11080 start.go:296] duration metric: took 4.837906s for postStartSetup
	I0904 00:26:15.371091   11080 fix.go:56] duration metric: took 1m26.2706448s for fixHost
	I0904 00:26:15.371091   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:26:17.504527   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:26:17.504527   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:17.504527   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:26:20.013981   11080 main.go:141] libmachine: [stdout =====>] : 172.25.123.14
	
	I0904 00:26:20.013981   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:20.019577   11080 main.go:141] libmachine: Using SSH client type: native
	I0904 00:26:20.019842   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.123.14 22 <nil> <nil>}
	I0904 00:26:20.020475   11080 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0904 00:26:20.151981   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 1756945580.153378502
	
	I0904 00:26:20.151981   11080 fix.go:216] guest clock: 1756945580.153378502
	I0904 00:26:20.151981   11080 fix.go:229] Guest: 2025-09-04 00:26:20.153378502 +0000 UTC Remote: 2025-09-04 00:26:15.3710915 +0000 UTC m=+261.631908301 (delta=4.782287002s)
	I0904 00:26:20.152155   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:26:22.251815   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:26:22.252910   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:22.252910   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:26:24.763825   11080 main.go:141] libmachine: [stdout =====>] : 172.25.123.14
	
	I0904 00:26:24.763825   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:24.769586   11080 main.go:141] libmachine: Using SSH client type: native
	I0904 00:26:24.770594   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.123.14 22 <nil> <nil>}
	I0904 00:26:24.770594   11080 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1756945580
	I0904 00:26:24.910687   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Sep  4 00:26:20 UTC 2025
	
	I0904 00:26:24.910720   11080 fix.go:236] clock set: Thu Sep  4 00:26:20 UTC 2025
	 (err=<nil>)
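The clock-set above follows a read/compare/correct cycle: read the guest clock with `date +%s.%N`, compute the drift against the host (here ~4.78s), and push the host epoch back with `sudo date -s @<epoch>`. A sketch of the comparison logic with example values standing in for the real timestamps (the actual reset requires root inside the VM):

```shell
# Example epochs; in the log these come from the VM and the host clock.
guest=1756945580   # read from the VM via `date +%s.%N`
host=1756945575    # host-side wall clock at the same instant
delta=$((guest - host))
# ${delta#-} strips a leading minus sign, giving the absolute drift.
if [ "${delta#-}" -gt 2 ]; then
  echo "clock drift ${delta}s: would run: sudo date -s @${guest}"
fi
```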
	I0904 00:26:24.910798   11080 start.go:83] releasing machines lock for "multinode-477700-m02", held for 1m35.8102217s
	I0904 00:26:24.911070   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:26:27.013725   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:26:27.014702   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:27.014814   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:26:29.548019   11080 main.go:141] libmachine: [stdout =====>] : 172.25.123.14
	
	I0904 00:26:29.548019   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:29.552347   11080 out.go:179] * Found network options:
	I0904 00:26:29.555594   11080 out.go:179]   - NO_PROXY=172.25.112.78
	W0904 00:26:29.558148   11080 proxy.go:120] fail to check proxy env: Error ip not in block
	I0904 00:26:29.560499   11080 out.go:179]   - NO_PROXY=172.25.112.78
	W0904 00:26:29.563455   11080 proxy.go:120] fail to check proxy env: Error ip not in block
	W0904 00:26:29.565490   11080 proxy.go:120] fail to check proxy env: Error ip not in block
	I0904 00:26:29.567485   11080 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0904 00:26:29.567485   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:26:29.577459   11080 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0904 00:26:29.577459   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:26:31.736330   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:26:31.736686   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:31.736686   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:26:31.737409   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:26:31.737409   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:31.737409   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:26:34.361806   11080 main.go:141] libmachine: [stdout =====>] : 172.25.123.14
	
	I0904 00:26:34.362490   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:34.362928   11080 sshutil.go:53] new ssh client: &{IP:172.25.123.14 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700-m02\id_rsa Username:docker}
	I0904 00:26:34.393279   11080 main.go:141] libmachine: [stdout =====>] : 172.25.123.14
	
	I0904 00:26:34.393279   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:34.394580   11080 sshutil.go:53] new ssh client: &{IP:172.25.123.14 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700-m02\id_rsa Username:docker}
	I0904 00:26:34.467281   11080 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.8993202s)
	W0904 00:26:34.467281   11080 start.go:868] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0904 00:26:34.486658   11080 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.9091333s)
	W0904 00:26:34.486720   11080 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0904 00:26:34.498801   11080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 00:26:34.535372   11080 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0904 00:26:34.535372   11080 start.go:495] detecting cgroup driver to use...
	I0904 00:26:34.535804   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 00:26:34.591900   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0904 00:26:34.626628   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W0904 00:26:34.639171   11080 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0904 00:26:34.639286   11080 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0904 00:26:34.652692   11080 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0904 00:26:34.664369   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0904 00:26:34.697886   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0904 00:26:34.730993   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0904 00:26:34.763798   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0904 00:26:34.796156   11080 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 00:26:34.829332   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0904 00:26:34.863802   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0904 00:26:34.896669   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0904 00:26:34.929767   11080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 00:26:34.950960   11080 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0904 00:26:34.964050   11080 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0904 00:26:34.998024   11080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
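The `sysctl` failure above is expected: `/proc/sys/net/bridge/bridge-nf-call-iptables` only exists once the `br_netfilter` module is loaded, so the provisioner treats the error as a cue to `modprobe` it and then enables IPv4 forwarding. A read-only sketch of the same detection (no root needed; the privileged steps are shown as messages only):

```shell
# Check whether bridge-netfilter is available before touching sysctls.
if [ -e /proc/sys/net/bridge/bridge-nf-call-iptables ]; then
  echo "br_netfilter already loaded"
else
  echo "missing: would run 'sudo modprobe br_netfilter'"
fi
# Report the current forwarding state instead of writing to it.
fwd=$(cat /proc/sys/net/ipv4/ip_forward 2>/dev/null || echo unknown)
echo "ip_forward is currently: $fwd"
```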
	I0904 00:26:35.047952   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 00:26:35.278372   11080 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0904 00:26:35.342492   11080 start.go:495] detecting cgroup driver to use...
	I0904 00:26:35.353493   11080 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0904 00:26:35.397487   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 00:26:35.433432   11080 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 00:26:35.477531   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 00:26:35.516523   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0904 00:26:35.553476   11080 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0904 00:26:35.623758   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0904 00:26:35.650744   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 00:26:35.700526   11080 ssh_runner.go:195] Run: which cri-dockerd
	I0904 00:26:35.718007   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0904 00:26:35.738743   11080 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0904 00:26:35.789800   11080 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0904 00:26:36.034979   11080 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0904 00:26:36.268919   11080 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0904 00:26:36.269018   11080 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0904 00:26:36.320780   11080 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0904 00:26:36.358186   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 00:26:36.597109   11080 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0904 00:26:37.446377   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 00:26:37.488739   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0904 00:26:37.530024   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0904 00:26:37.572932   11080 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0904 00:26:37.821134   11080 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0904 00:26:38.071084   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 00:26:38.311393   11080 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0904 00:26:38.376687   11080 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0904 00:26:38.413818   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 00:26:38.652518   11080 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0904 00:26:38.812387   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0904 00:26:38.847680   11080 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0904 00:26:38.860548   11080 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0904 00:26:38.871623   11080 start.go:563] Will wait 60s for crictl version
	I0904 00:26:38.884332   11080 ssh_runner.go:195] Run: which crictl
	I0904 00:26:38.901990   11080 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 00:26:38.959738   11080 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.3.2
	RuntimeApiVersion:  v1
	I0904 00:26:38.970716   11080 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0904 00:26:39.013976   11080 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0904 00:26:39.055296   11080 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.3.2 ...
	I0904 00:26:39.057885   11080 out.go:179]   - env NO_PROXY=172.25.112.78
	I0904 00:26:39.059874   11080 ip.go:180] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0904 00:26:39.064458   11080 ip.go:194] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0904 00:26:39.064458   11080 ip.go:194] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0904 00:26:39.064458   11080 ip.go:189] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0904 00:26:39.064458   11080 ip.go:215] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:71:2e:33 Flags:up|broadcast|multicast|running}
	I0904 00:26:39.067479   11080 ip.go:218] interface addr: fe80::b536:5e95:cebf:bd87/64
	I0904 00:26:39.067479   11080 ip.go:218] interface addr: 172.25.112.1/20
	I0904 00:26:39.081939   11080 ssh_runner.go:195] Run: grep 172.25.112.1	host.minikube.internal$ /etc/hosts
	I0904 00:26:39.088432   11080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.112.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
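The `/etc/hosts` rewrite above uses a filter-append-copy pattern: strip any stale `host.minikube.internal` line, append the fresh mapping, and copy the temp file over the original in one step. Demonstrated here on a scratch copy rather than the real `/etc/hosts` (addresses are taken from the log):

```shell
# Build a scratch hosts file with a stale host.minikube.internal entry.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"
# Drop the old entry, append the current gateway IP, then swap the file in.
{ grep -v 'host.minikube.internal$' "$hosts"
  printf '172.25.112.1\thost.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
grep host.minikube.internal "$hosts"
```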
	I0904 00:26:39.114372   11080 mustload.go:65] Loading cluster: multinode-477700
	I0904 00:26:39.115044   11080 config.go:182] Loaded profile config "multinode-477700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0904 00:26:39.115707   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:26:41.182183   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:26:41.182183   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:41.182183   11080 host.go:66] Checking if "multinode-477700" exists ...
	I0904 00:26:41.183973   11080 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700 for IP: 172.25.123.14
	I0904 00:26:41.183973   11080 certs.go:194] generating shared ca certs ...
	I0904 00:26:41.184091   11080 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 00:26:41.184731   11080 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0904 00:26:41.185078   11080 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0904 00:26:41.185078   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0904 00:26:41.186512   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0904 00:26:41.186512   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0904 00:26:41.186512   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0904 00:26:41.187198   11080 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220.pem (1338 bytes)
	W0904 00:26:41.187970   11080 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220_empty.pem, impossibly tiny 0 bytes
	I0904 00:26:41.188062   11080 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0904 00:26:41.188419   11080 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0904 00:26:41.188644   11080 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0904 00:26:41.188866   11080 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0904 00:26:41.189434   11080 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem (1708 bytes)
	I0904 00:26:41.189613   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220.pem -> /usr/share/ca-certificates/2220.pem
	I0904 00:26:41.189613   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem -> /usr/share/ca-certificates/22202.pem
	I0904 00:26:41.189613   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0904 00:26:41.190432   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 00:26:41.247241   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0904 00:26:41.300774   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 00:26:41.352486   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0904 00:26:41.407183   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220.pem --> /usr/share/ca-certificates/2220.pem (1338 bytes)
	I0904 00:26:41.459914   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem --> /usr/share/ca-certificates/22202.pem (1708 bytes)
	I0904 00:26:41.511069   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 00:26:41.581327   11080 ssh_runner.go:195] Run: openssl version
	I0904 00:26:41.602540   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 00:26:41.635439   11080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 00:26:41.642285   11080 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  3 22:20 /usr/share/ca-certificates/minikubeCA.pem
	I0904 00:26:41.654910   11080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 00:26:41.678096   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0904 00:26:41.712973   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2220.pem && ln -fs /usr/share/ca-certificates/2220.pem /etc/ssl/certs/2220.pem"
	I0904 00:26:41.748197   11080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2220.pem
	I0904 00:26:41.755561   11080 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  3 22:37 /usr/share/ca-certificates/2220.pem
	I0904 00:26:41.768461   11080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2220.pem
	I0904 00:26:41.789473   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2220.pem /etc/ssl/certs/51391683.0"
	I0904 00:26:41.822692   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22202.pem && ln -fs /usr/share/ca-certificates/22202.pem /etc/ssl/certs/22202.pem"
	I0904 00:26:41.857824   11080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22202.pem
	I0904 00:26:41.865244   11080 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  3 22:37 /usr/share/ca-certificates/22202.pem
	I0904 00:26:41.878723   11080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22202.pem
	I0904 00:26:41.903282   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22202.pem /etc/ssl/certs/3ec20f2e.0"
	I0904 00:26:41.936848   11080 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 00:26:41.944842   11080 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0904 00:26:41.944842   11080 kubeadm.go:926] updating node {m02 172.25.123.14 8443 v1.34.0 docker false true} ...
	I0904 00:26:41.944842   11080 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-477700-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.123.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:multinode-477700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0904 00:26:41.959547   11080 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0904 00:26:41.980487   11080 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 00:26:41.993133   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0904 00:26:42.014364   11080 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0904 00:26:42.059353   11080 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 00:26:42.123704   11080 ssh_runner.go:195] Run: grep 172.25.112.78	control-plane.minikube.internal$ /etc/hosts
	I0904 00:26:42.130834   11080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.112.78	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 00:26:42.175966   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 00:26:42.423039   11080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 00:26:42.482889   11080 host.go:66] Checking if "multinode-477700" exists ...
	I0904 00:26:42.484394   11080 start.go:317] joinCluster: &{Name:multinode-477700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.
0 ClusterName:multinode-477700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.112.78 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.123.14 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.25.125.123 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false i
ngress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 00:26:42.484586   11080 start.go:330] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.25.123.14 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0904 00:26:42.484645   11080 host.go:66] Checking if "multinode-477700-m02" exists ...
	I0904 00:26:42.485266   11080 mustload.go:65] Loading cluster: multinode-477700
	I0904 00:26:42.485585   11080 config.go:182] Loaded profile config "multinode-477700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0904 00:26:42.486378   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:26:44.650825   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:26:44.651338   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:44.651338   11080 host.go:66] Checking if "multinode-477700" exists ...
	I0904 00:26:44.652160   11080 api_server.go:166] Checking apiserver status ...
	I0904 00:26:44.663203   11080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 00:26:44.663203   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:26:46.826624   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:26:46.827600   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:46.827983   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:26:49.320019   11080 main.go:141] libmachine: [stdout =====>] : 172.25.112.78
	
	I0904 00:26:49.320019   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:49.321510   11080 sshutil.go:53] new ssh client: &{IP:172.25.112.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700\id_rsa Username:docker}
	I0904 00:26:49.450572   11080 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.7873042s)
	I0904 00:26:49.464391   11080 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2455/cgroup
	W0904 00:26:49.484842   11080 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2455/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0904 00:26:49.496635   11080 ssh_runner.go:195] Run: ls
	I0904 00:26:49.507262   11080 api_server.go:253] Checking apiserver healthz at https://172.25.112.78:8443/healthz ...
	I0904 00:26:49.518018   11080 api_server.go:279] https://172.25.112.78:8443/healthz returned 200:
	ok
	I0904 00:26:49.529876   11080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl drain multinode-477700-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0904 00:26:52.735999   11080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl drain multinode-477700-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.2060797s)
	I0904 00:26:52.735999   11080 node.go:128] successfully drained node "multinode-477700-m02"
	I0904 00:26:52.735999   11080 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0904 00:26:52.735999   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:26:54.812706   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:26:54.813661   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:54.813735   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:26:57.302324   11080 main.go:141] libmachine: [stdout =====>] : 172.25.123.14
	
	I0904 00:26:57.303046   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:57.303486   11080 sshutil.go:53] new ssh client: &{IP:172.25.123.14 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700-m02\id_rsa Username:docker}
	I0904 00:26:58.139628   11080 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (5.4035561s)
	I0904 00:26:58.139628   11080 node.go:155] successfully reset node "multinode-477700-m02"
	I0904 00:26:58.141409   11080 kapi.go:59] client config for multinode-477700: &rest.Config{Host:"https://172.25.112.78:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-477700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-477700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24e0580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0904 00:26:58.143070   11080 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0904 00:26:58.163813   11080 node.go:180] successfully deleted node "multinode-477700-m02"
	I0904 00:26:58.163813   11080 start.go:334] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.25.123.14 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0904 00:26:58.164803   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0904 00:26:58.164803   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:27:00.222650   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:27:00.222650   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:27:00.223175   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-477700" : exit status 1
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-477700
multinode_test.go:331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node list -p multinode-477700: context deadline exceeded (63.9µs)
multinode_test.go:333: failed to run node list. args "out/minikube-windows-amd64.exe node list -p multinode-477700" : context deadline exceeded
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-477700	172.25.126.63
multinode-477700-m02	172.25.125.181
multinode-477700-m03	172.25.125.123

                                                
                                                
After restart: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-477700 -n multinode-477700
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-477700 -n multinode-477700: (12.0335662s)
helpers_test.go:252: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 logs -n 25: (8.6856101s)
helpers_test.go:260: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                            ARGS                                                                                            │     PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ multinode-477700 cp testdata\cp-test.txt multinode-477700-m02:/home/docker/cp-test.txt                                                                                                     │ multinode-477700 │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:12 UTC │ 04 Sep 25 00:12 UTC │
	│ ssh     │ multinode-477700 ssh -n multinode-477700-m02 sudo cat /home/docker/cp-test.txt                                                                                                             │ multinode-477700 │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:12 UTC │ 04 Sep 25 00:12 UTC │
	│ cp      │ multinode-477700 cp multinode-477700-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile1527156918\001\cp-test_multinode-477700-m02.txt │ multinode-477700 │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:12 UTC │ 04 Sep 25 00:12 UTC │
	│ ssh     │ multinode-477700 ssh -n multinode-477700-m02 sudo cat /home/docker/cp-test.txt                                                                                                             │ multinode-477700 │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:12 UTC │ 04 Sep 25 00:12 UTC │
	│ cp      │ multinode-477700 cp multinode-477700-m02:/home/docker/cp-test.txt multinode-477700:/home/docker/cp-test_multinode-477700-m02_multinode-477700.txt                                          │ multinode-477700 │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:12 UTC │ 04 Sep 25 00:13 UTC │
	│ ssh     │ multinode-477700 ssh -n multinode-477700-m02 sudo cat /home/docker/cp-test.txt                                                                                                             │ multinode-477700 │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:13 UTC │ 04 Sep 25 00:13 UTC │
	│ ssh     │ multinode-477700 ssh -n multinode-477700 sudo cat /home/docker/cp-test_multinode-477700-m02_multinode-477700.txt                                                                           │ multinode-477700 │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:13 UTC │ 04 Sep 25 00:13 UTC │
	│ cp      │ multinode-477700 cp multinode-477700-m02:/home/docker/cp-test.txt multinode-477700-m03:/home/docker/cp-test_multinode-477700-m02_multinode-477700-m03.txt                                  │ multinode-477700 │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:13 UTC │ 04 Sep 25 00:13 UTC │
	│ ssh     │ multinode-477700 ssh -n multinode-477700-m02 sudo cat /home/docker/cp-test.txt                                                                                                             │ multinode-477700 │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:13 UTC │ 04 Sep 25 00:13 UTC │
	│ ssh     │ multinode-477700 ssh -n multinode-477700-m03 sudo cat /home/docker/cp-test_multinode-477700-m02_multinode-477700-m03.txt                                                                   │ multinode-477700 │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:13 UTC │ 04 Sep 25 00:14 UTC │
	│ cp      │ multinode-477700 cp testdata\cp-test.txt multinode-477700-m03:/home/docker/cp-test.txt                                                                                                     │ multinode-477700 │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:14 UTC │ 04 Sep 25 00:14 UTC │
	│ ssh     │ multinode-477700 ssh -n multinode-477700-m03 sudo cat /home/docker/cp-test.txt                                                                                                             │ multinode-477700 │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:14 UTC │ 04 Sep 25 00:14 UTC │
	│ cp      │ multinode-477700 cp multinode-477700-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile1527156918\001\cp-test_multinode-477700-m03.txt │ multinode-477700 │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:14 UTC │ 04 Sep 25 00:14 UTC │
	│ ssh     │ multinode-477700 ssh -n multinode-477700-m03 sudo cat /home/docker/cp-test.txt                                                                                                             │ multinode-477700 │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:14 UTC │ 04 Sep 25 00:14 UTC │
	│ cp      │ multinode-477700 cp multinode-477700-m03:/home/docker/cp-test.txt multinode-477700:/home/docker/cp-test_multinode-477700-m03_multinode-477700.txt                                          │ multinode-477700 │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:14 UTC │ 04 Sep 25 00:14 UTC │
	│ ssh     │ multinode-477700 ssh -n multinode-477700-m03 sudo cat /home/docker/cp-test.txt                                                                                                             │ multinode-477700 │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:14 UTC │ 04 Sep 25 00:15 UTC │
	│ ssh     │ multinode-477700 ssh -n multinode-477700 sudo cat /home/docker/cp-test_multinode-477700-m03_multinode-477700.txt                                                                           │ multinode-477700 │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:15 UTC │ 04 Sep 25 00:15 UTC │
	│ cp      │ multinode-477700 cp multinode-477700-m03:/home/docker/cp-test.txt multinode-477700-m02:/home/docker/cp-test_multinode-477700-m03_multinode-477700-m02.txt                                  │ multinode-477700 │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:15 UTC │ 04 Sep 25 00:15 UTC │
	│ ssh     │ multinode-477700 ssh -n multinode-477700-m03 sudo cat /home/docker/cp-test.txt                                                                                                             │ multinode-477700 │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:15 UTC │ 04 Sep 25 00:15 UTC │
	│ ssh     │ multinode-477700 ssh -n multinode-477700-m02 sudo cat /home/docker/cp-test_multinode-477700-m03_multinode-477700-m02.txt                                                                   │ multinode-477700 │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:15 UTC │ 04 Sep 25 00:15 UTC │
	│ node    │ multinode-477700 node stop m03                                                                                                                                                             │ multinode-477700 │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:15 UTC │ 04 Sep 25 00:16 UTC │
	│ node    │ multinode-477700 node start m03 -v=5 --alsologtostderr                                                                                                                                     │ multinode-477700 │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:17 UTC │ 04 Sep 25 00:19 UTC │
	│ node    │ list -p multinode-477700                                                                                                                                                                   │ multinode-477700 │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:20 UTC │                     │
	│ stop    │ -p multinode-477700                                                                                                                                                                        │ multinode-477700 │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:20 UTC │ 04 Sep 25 00:21 UTC │
	│ start   │ -p multinode-477700 --wait=true -v=5 --alsologtostderr                                                                                                                                     │ multinode-477700 │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:21 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 00:21:53
	Running on machine: minikube6
	Binary: Built with gc go1.24.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 00:21:53.830751   11080 out.go:360] Setting OutFile to fd 1044 ...
	I0904 00:21:53.908670   11080 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 00:21:53.908670   11080 out.go:374] Setting ErrFile to fd 1244...
	I0904 00:21:53.908670   11080 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 00:21:53.927252   11080 out.go:368] Setting JSON to false
	I0904 00:21:53.929759   11080 start.go:130] hostinfo: {"hostname":"minikube6","uptime":28419,"bootTime":1756916894,"procs":179,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6282 Build 19045.6282","kernelVersion":"10.0.19045.6282 Build 19045.6282","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0904 00:21:53.930734   11080 start.go:138] gopshost.Virtualization returned error: not implemented yet
	I0904 00:21:54.162920   11080 out.go:179] * [multinode-477700] minikube v1.36.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6282 Build 19045.6282
	I0904 00:21:54.209304   11080 notify.go:220] Checking for updates...
	I0904 00:21:54.222664   11080 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0904 00:21:54.270626   11080 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 00:21:54.315740   11080 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0904 00:21:54.328882   11080 out.go:179]   - MINIKUBE_LOCATION=21341
	I0904 00:21:54.356118   11080 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 00:21:54.364932   11080 config.go:182] Loaded profile config "multinode-477700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0904 00:21:54.364932   11080 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 00:21:59.700221   11080 out.go:179] * Using the hyperv driver based on existing profile
	I0904 00:21:59.708846   11080 start.go:304] selected driver: hyperv
	I0904 00:21:59.708885   11080 start.go:918] validating driver "hyperv" against &{Name:multinode-477700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.34.0 ClusterName:multinode-477700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.126.63 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.125.181 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.25.125.123 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:fals
e ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 00:21:59.708912   11080 start.go:929] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 00:21:59.765931   11080 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 00:21:59.765931   11080 cni.go:84] Creating CNI manager for ""
	I0904 00:21:59.766489   11080 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0904 00:21:59.766799   11080 start.go:348] cluster config:
	{Name:multinode-477700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:multinode-477700 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.126.63 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.125.181 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.25.125.123 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio
-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 00:21:59.767173   11080 iso.go:125] acquiring lock: {Name:mk966bde02eeea119c68f0830e579f0a83ec9e11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 00:21:59.777512   11080 out.go:179] * Starting "multinode-477700" primary control-plane node in "multinode-477700" cluster
	I0904 00:21:59.783173   11080 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0904 00:21:59.783688   11080 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0904 00:21:59.783774   11080 cache.go:58] Caching tarball of preloaded images
	I0904 00:21:59.784037   11080 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0904 00:21:59.784200   11080 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0904 00:21:59.784701   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\config.json ...
	I0904 00:21:59.787503   11080 start.go:360] acquireMachinesLock for multinode-477700: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 00:21:59.787503   11080 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-477700"
	I0904 00:21:59.788152   11080 start.go:96] Skipping create...Using existing machine configuration
	I0904 00:21:59.788239   11080 fix.go:54] fixHost starting: 
	I0904 00:21:59.788996   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:22:02.453183   11080 main.go:141] libmachine: [stdout =====>] : Off
	
	I0904 00:22:02.454275   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:02.454448   11080 fix.go:112] recreateIfNeeded on multinode-477700: state=Stopped err=<nil>
	W0904 00:22:02.454514   11080 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 00:22:02.518803   11080 out.go:252] * Restarting existing hyperv VM for "multinode-477700" ...
	I0904 00:22:02.520495   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-477700
	I0904 00:22:05.617381   11080 main.go:141] libmachine: [stdout =====>] : 
	I0904 00:22:05.617381   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:05.617381   11080 main.go:141] libmachine: Waiting for host to start...
	I0904 00:22:05.617381   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:22:07.790361   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:22:07.790412   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:07.790532   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:22:10.288167   11080 main.go:141] libmachine: [stdout =====>] : 
	I0904 00:22:10.288167   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:11.289163   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:22:13.443324   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:22:13.443489   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:13.443489   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:22:15.930487   11080 main.go:141] libmachine: [stdout =====>] : 
	I0904 00:22:15.930765   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:16.930936   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:22:19.095594   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:22:19.096269   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:19.096269   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:22:21.563235   11080 main.go:141] libmachine: [stdout =====>] : 
	I0904 00:22:21.563235   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:22.565078   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:22:24.696931   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:22:24.696931   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:24.696931   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:22:27.193028   11080 main.go:141] libmachine: [stdout =====>] : 
	I0904 00:22:27.193650   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:28.193723   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:22:30.332788   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:22:30.332788   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:30.333826   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:22:32.846163   11080 main.go:141] libmachine: [stdout =====>] : 172.25.112.78
	
	I0904 00:22:32.846163   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:32.849315   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:22:34.884654   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:22:34.885005   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:34.885005   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:22:37.354322   11080 main.go:141] libmachine: [stdout =====>] : 172.25.112.78
	
	I0904 00:22:37.354322   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:37.355353   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\config.json ...
	I0904 00:22:37.357440   11080 machine.go:93] provisionDockerMachine start ...
	I0904 00:22:37.358358   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:22:39.473855   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:22:39.474870   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:39.475020   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:22:41.907103   11080 main.go:141] libmachine: [stdout =====>] : 172.25.112.78
	
	I0904 00:22:41.907134   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:41.913197   11080 main.go:141] libmachine: Using SSH client type: native
	I0904 00:22:41.913880   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.112.78 22 <nil> <nil>}
	I0904 00:22:41.913880   11080 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 00:22:42.052582   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0904 00:22:42.052582   11080 buildroot.go:166] provisioning hostname "multinode-477700"
	I0904 00:22:42.052582   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:22:44.057552   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:22:44.057618   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:44.057990   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:22:46.457273   11080 main.go:141] libmachine: [stdout =====>] : 172.25.112.78
	
	I0904 00:22:46.457273   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:46.464964   11080 main.go:141] libmachine: Using SSH client type: native
	I0904 00:22:46.465120   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.112.78 22 <nil> <nil>}
	I0904 00:22:46.465120   11080 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-477700 && echo "multinode-477700" | sudo tee /etc/hostname
	I0904 00:22:46.636909   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-477700
	
	I0904 00:22:46.636909   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:22:48.631517   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:22:48.631517   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:48.631939   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:22:51.091840   11080 main.go:141] libmachine: [stdout =====>] : 172.25.112.78
	
	I0904 00:22:51.091840   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:51.100935   11080 main.go:141] libmachine: Using SSH client type: native
	I0904 00:22:51.100935   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.112.78 22 <nil> <nil>}
	I0904 00:22:51.100935   11080 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-477700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-477700/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-477700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 00:22:51.251091   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 00:22:51.251091   11080 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0904 00:22:51.251091   11080 buildroot.go:174] setting up certificates
	I0904 00:22:51.251091   11080 provision.go:84] configureAuth start
	I0904 00:22:51.251091   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:22:53.317236   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:22:53.317236   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:53.318363   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:22:55.701933   11080 main.go:141] libmachine: [stdout =====>] : 172.25.112.78
	
	I0904 00:22:55.702529   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:55.702529   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:22:57.731008   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:22:57.732150   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:22:57.732150   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:23:00.182342   11080 main.go:141] libmachine: [stdout =====>] : 172.25.112.78
	
	I0904 00:23:00.182342   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:00.182539   11080 provision.go:143] copyHostCerts
	I0904 00:23:00.183049   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0904 00:23:00.183398   11080 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0904 00:23:00.183519   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0904 00:23:00.184186   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0904 00:23:00.186001   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0904 00:23:00.186603   11080 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0904 00:23:00.186603   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0904 00:23:00.187373   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0904 00:23:00.188962   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0904 00:23:00.189282   11080 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0904 00:23:00.189282   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0904 00:23:00.190090   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0904 00:23:00.190937   11080 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-477700 san=[127.0.0.1 172.25.112.78 localhost minikube multinode-477700]
	I0904 00:23:00.423594   11080 provision.go:177] copyRemoteCerts
	I0904 00:23:00.436508   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 00:23:00.436508   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:23:02.454944   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:23:02.454944   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:02.455412   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:23:04.864600   11080 main.go:141] libmachine: [stdout =====>] : 172.25.112.78
	
	I0904 00:23:04.864765   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:04.865309   11080 sshutil.go:53] new ssh client: &{IP:172.25.112.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700\id_rsa Username:docker}
	I0904 00:23:04.984999   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5483161s)
	I0904 00:23:04.985103   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0904 00:23:04.985524   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0904 00:23:05.037788   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0904 00:23:05.037788   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0904 00:23:05.090714   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0904 00:23:05.090714   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0904 00:23:05.142464   11080 provision.go:87] duration metric: took 13.8911842s to configureAuth
	I0904 00:23:05.142464   11080 buildroot.go:189] setting minikube options for container-runtime
	I0904 00:23:05.143213   11080 config.go:182] Loaded profile config "multinode-477700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0904 00:23:05.143302   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:23:07.103929   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:23:07.103929   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:07.103929   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:23:09.546365   11080 main.go:141] libmachine: [stdout =====>] : 172.25.112.78
	
	I0904 00:23:09.546365   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:09.552997   11080 main.go:141] libmachine: Using SSH client type: native
	I0904 00:23:09.553312   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.112.78 22 <nil> <nil>}
	I0904 00:23:09.553312   11080 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0904 00:23:09.699438   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0904 00:23:09.700005   11080 buildroot.go:70] root file system type: tmpfs
	I0904 00:23:09.700199   11080 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0904 00:23:09.700199   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:23:11.723292   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:23:11.723292   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:11.723503   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:23:14.195537   11080 main.go:141] libmachine: [stdout =====>] : 172.25.112.78
	
	I0904 00:23:14.196466   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:14.202735   11080 main.go:141] libmachine: Using SSH client type: native
	I0904 00:23:14.203606   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.112.78 22 <nil> <nil>}
	I0904 00:23:14.203606   11080 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0904 00:23:14.373927   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0904 00:23:14.374149   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:23:16.365404   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:23:16.366574   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:16.366627   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:23:18.868457   11080 main.go:141] libmachine: [stdout =====>] : 172.25.112.78
	
	I0904 00:23:18.869138   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:18.875760   11080 main.go:141] libmachine: Using SSH client type: native
	I0904 00:23:18.876589   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.112.78 22 <nil> <nil>}
	I0904 00:23:18.876589   11080 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0904 00:23:20.602448   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
	
	I0904 00:23:20.602448   11080 machine.go:96] duration metric: took 43.2444198s to provisionDockerMachine
	I0904 00:23:20.602448   11080 start.go:293] postStartSetup for "multinode-477700" (driver="hyperv")
	I0904 00:23:20.602448   11080 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 00:23:20.613439   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 00:23:20.613439   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:23:22.777800   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:23:22.778267   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:22.778267   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:23:25.274825   11080 main.go:141] libmachine: [stdout =====>] : 172.25.112.78
	
	I0904 00:23:25.275971   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:25.276422   11080 sshutil.go:53] new ssh client: &{IP:172.25.112.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700\id_rsa Username:docker}
	I0904 00:23:25.391333   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7778292s)
	I0904 00:23:25.405582   11080 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 00:23:25.413257   11080 info.go:137] Remote host: Buildroot 2025.02
	I0904 00:23:25.413257   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0904 00:23:25.413257   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0904 00:23:25.414614   11080 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem -> 22202.pem in /etc/ssl/certs
	I0904 00:23:25.414706   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem -> /etc/ssl/certs/22202.pem
	I0904 00:23:25.426889   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0904 00:23:25.447418   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem --> /etc/ssl/certs/22202.pem (1708 bytes)
	I0904 00:23:25.500007   11080 start.go:296] duration metric: took 4.8974931s for postStartSetup
	I0904 00:23:25.500007   11080 fix.go:56] duration metric: took 1m25.7106027s for fixHost
	I0904 00:23:25.500007   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:23:27.535495   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:23:27.536344   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:27.536344   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:23:29.973646   11080 main.go:141] libmachine: [stdout =====>] : 172.25.112.78
	
	I0904 00:23:29.973646   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:29.979368   11080 main.go:141] libmachine: Using SSH client type: native
	I0904 00:23:29.979567   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.112.78 22 <nil> <nil>}
	I0904 00:23:29.979567   11080 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0904 00:23:30.114942   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 1756945410.135584876
	
	I0904 00:23:30.115136   11080 fix.go:216] guest clock: 1756945410.135584876
	I0904 00:23:30.115136   11080 fix.go:229] Guest: 2025-09-04 00:23:30.135584876 +0000 UTC Remote: 2025-09-04 00:23:25.5000078 +0000 UTC m=+91.763115801 (delta=4.635577076s)
	I0904 00:23:30.115270   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:23:32.167447   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:23:32.167503   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:32.167503   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:23:34.633958   11080 main.go:141] libmachine: [stdout =====>] : 172.25.112.78
	
	I0904 00:23:34.634420   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:34.640556   11080 main.go:141] libmachine: Using SSH client type: native
	I0904 00:23:34.641189   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.112.78 22 <nil> <nil>}
	I0904 00:23:34.641189   11080 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1756945410
	I0904 00:23:34.790849   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Sep  4 00:23:30 UTC 2025
	
	I0904 00:23:34.790849   11080 fix.go:236] clock set: Thu Sep  4 00:23:30 UTC 2025
	 (err=<nil>)
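The clock fix above (fix.go:229) is plain epoch arithmetic: read the guest clock with `date +%s.%N`, subtract the host-side timestamp, and if the delta is too large reset the guest with `date -s @<epoch>`. A minimal local sketch of the arithmetic, with the values copied from the log (variable names are illustrative):

```shell
guest=1756945410.135584876   # guest clock read via `date +%s.%N` over SSH
remote=1756945405.5000078    # host-side wall clock at the same moment
delta=$(awk -v g="$guest" -v r="$remote" 'BEGIN { printf "%.6f", g - r }')
echo "clock skew: ${delta}s"
# minikube then forces the guest back in step with (run over SSH, needs root):
#   sudo date -s @1756945410
```

This matches the `delta=4.635577076s` the log reports before the `sudo date -s @1756945410` command.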
	I0904 00:23:34.790849   11080 start.go:83] releasing machines lock for "multinode-477700", held for 1m35.0020545s
	I0904 00:23:34.791789   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:23:36.800343   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:23:36.800343   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:36.801010   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:23:39.293977   11080 main.go:141] libmachine: [stdout =====>] : 172.25.112.78
	
	I0904 00:23:39.293977   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:39.299358   11080 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0904 00:23:39.299596   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:23:39.317231   11080 ssh_runner.go:195] Run: cat /version.json
	I0904 00:23:39.317499   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:23:41.458180   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:23:41.458180   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:41.458180   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:23:41.458180   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:23:41.458180   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:41.459162   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:23:44.105052   11080 main.go:141] libmachine: [stdout =====>] : 172.25.112.78
	
	I0904 00:23:44.105052   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:44.105356   11080 sshutil.go:53] new ssh client: &{IP:172.25.112.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700\id_rsa Username:docker}
	I0904 00:23:44.128684   11080 main.go:141] libmachine: [stdout =====>] : 172.25.112.78
	
	I0904 00:23:44.128684   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:23:44.129695   11080 sshutil.go:53] new ssh client: &{IP:172.25.112.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700\id_rsa Username:docker}
	I0904 00:23:44.198467   11080 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.8989559s)
	W0904 00:23:44.198598   11080 start.go:868] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
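The `status 127` above is the shell's "command not found" code: the runner invoked the Windows binary name `curl.exe` inside the Linux guest, where only plain `curl` exists. The exit status is easy to reproduce locally (assuming, as on any normal Linux box, that no `curl.exe` is on PATH):

```shell
# sh returns 127 when the command name cannot be resolved at all;
# nothing is executed, so no network request is made
sh -c 'curl.exe -sS -m 2 https://registry.k8s.io/' 2>/dev/null
status=$?
echo "exit=$status"
# inside the guest, `curl -sS -m 2 https://registry.k8s.io/` is the working form
```

This failed reachability probe is what later surfaces as the "Failing to connect to https://registry.k8s.io/ from inside the minikube VM" warning.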
	I0904 00:23:44.231284   11080 ssh_runner.go:235] Completed: cat /version.json: (4.9139556s)
	I0904 00:23:44.246092   11080 ssh_runner.go:195] Run: systemctl --version
	I0904 00:23:44.270631   11080 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0904 00:23:44.281650   11080 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0904 00:23:44.294336   11080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 00:23:44.328601   11080 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0904 00:23:44.328601   11080 start.go:495] detecting cgroup driver to use...
	I0904 00:23:44.329005   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0904 00:23:44.339070   11080 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0904 00:23:44.339168   11080 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0904 00:23:44.385278   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0904 00:23:44.421750   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0904 00:23:44.447137   11080 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0904 00:23:44.462394   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0904 00:23:44.503962   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0904 00:23:44.543202   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0904 00:23:44.577063   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0904 00:23:44.612905   11080 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 00:23:44.661199   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0904 00:23:44.694723   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0904 00:23:44.726341   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
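The run of `sed -i` commands above rewrites `/etc/containerd/config.toml` in place on the guest. The same substitutions can be exercised against a scratch file; the TOML below is a made-up minimal sample, not the real config:

```shell
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# same expressions as the log, minus sudo, pointed at the scratch file
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
out=$(grep -E 'sandbox_image|SystemdCgroup' "$cfg")
echo "$out"
rm -f "$cfg"
```

The `\1` backreference preserves the original indentation, which is why the expressions capture the leading spaces instead of anchoring on column zero.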
	I0904 00:23:44.756868   11080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 00:23:44.776495   11080 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0904 00:23:44.787538   11080 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0904 00:23:44.820594   11080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
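The crio.go:166 warning above is expected on first contact: `/proc/sys/net/bridge/bridge-nf-call-iptables` only exists once the `br_netfilter` module is loaded, which is why the failed `sysctl` is immediately followed by `modprobe br_netfilter`. The probe-then-load logic, sketched without root:

```shell
# the sysctl node appears only after br_netfilter is loaded
if [ -e /proc/sys/net/bridge/bridge-nf-call-iptables ]; then
  state="present"
else
  state="missing (would run: sudo modprobe br_netfilter)"
fi
echo "bridge-nf-call-iptables: $state"
```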
	I0904 00:23:44.849743   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 00:23:45.075413   11080 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0904 00:23:45.136540   11080 start.go:495] detecting cgroup driver to use...
	I0904 00:23:45.146840   11080 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0904 00:23:45.182265   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 00:23:45.213362   11080 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 00:23:45.256226   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 00:23:45.296714   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0904 00:23:45.333262   11080 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0904 00:23:45.396624   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0904 00:23:45.420897   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 00:23:45.468280   11080 ssh_runner.go:195] Run: which cri-dockerd
	I0904 00:23:45.485286   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0904 00:23:45.506287   11080 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0904 00:23:45.556267   11080 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0904 00:23:45.790523   11080 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0904 00:23:46.010920   11080 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0904 00:23:46.010920   11080 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0904 00:23:46.064519   11080 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0904 00:23:46.099172   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 00:23:46.323229   11080 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0904 00:23:47.147448   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 00:23:47.181995   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0904 00:23:47.223294   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0904 00:23:47.256810   11080 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0904 00:23:47.478989   11080 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0904 00:23:47.719263   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 00:23:47.960095   11080 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0904 00:23:48.025713   11080 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0904 00:23:48.067607   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 00:23:48.317522   11080 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0904 00:23:48.499541   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0904 00:23:48.527119   11080 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0904 00:23:48.540422   11080 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0904 00:23:48.550993   11080 start.go:563] Will wait 60s for crictl version
	I0904 00:23:48.563274   11080 ssh_runner.go:195] Run: which crictl
	I0904 00:23:48.582634   11080 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 00:23:48.651628   11080 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.3.2
	RuntimeApiVersion:  v1
	I0904 00:23:48.661911   11080 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0904 00:23:48.708642   11080 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0904 00:23:48.750835   11080 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.3.2 ...
	I0904 00:23:48.750835   11080 ip.go:180] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0904 00:23:48.754823   11080 ip.go:194] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0904 00:23:48.754823   11080 ip.go:194] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0904 00:23:48.754823   11080 ip.go:189] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0904 00:23:48.754823   11080 ip.go:215] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:71:2e:33 Flags:up|broadcast|multicast|running}
	I0904 00:23:48.757821   11080 ip.go:218] interface addr: fe80::b536:5e95:cebf:bd87/64
	I0904 00:23:48.757821   11080 ip.go:218] interface addr: 172.25.112.1/20
	I0904 00:23:48.769848   11080 ssh_runner.go:195] Run: grep 172.25.112.1	host.minikube.internal$ /etc/hosts
	I0904 00:23:48.777171   11080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.112.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
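The `/etc/hosts` command above is an idempotent upsert: strip any stale `host.minikube.internal` line, append the current mapping, and copy the result back. The same shape applied to a scratch copy (the stale `172.25.0.9` entry below is invented for illustration):

```shell
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.25.0.9\thost.minikube.internal\n' > "$hosts"
# drop the old entry (tab-anchored match), then append the fresh one
{ grep -v $'\thost.minikube.internal$' "$hosts"; printf '172.25.112.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
entry=$(grep 'host.minikube.internal' "$hosts")
echo "$entry"
rm -f "$hosts"
```

Because the filter runs before the append, rerunning the command never accumulates duplicate entries.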
	I0904 00:23:48.801144   11080 kubeadm.go:875] updating cluster {Name:multinode-477700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:multinode-477700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.112.78 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.125.181 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.25.125.123 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0904 00:23:48.801684   11080 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0904 00:23:48.811116   11080 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0904 00:23:48.838421   11080 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	kindest/kindnetd:v20250512-df8de77b
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0904 00:23:48.838421   11080 docker.go:621] Images already preloaded, skipping extraction
	I0904 00:23:48.847580   11080 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0904 00:23:48.872099   11080 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	kindest/kindnetd:v20250512-df8de77b
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0904 00:23:48.872161   11080 cache_images.go:85] Images are preloaded, skipping loading
	I0904 00:23:48.872161   11080 kubeadm.go:926] updating node { 172.25.112.78 8443 v1.34.0 docker true true} ...
	I0904 00:23:48.872666   11080 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-477700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.112.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:multinode-477700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0904 00:23:48.883194   11080 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0904 00:23:48.948011   11080 cni.go:84] Creating CNI manager for ""
	I0904 00:23:48.948011   11080 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0904 00:23:48.948011   11080 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 00:23:48.948011   11080 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.112.78 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-477700 NodeName:multinode-477700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.112.78"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.112.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 00:23:48.949014   11080 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.112.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-477700"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.25.112.78"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.112.78"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0904 00:23:48.961035   11080 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0904 00:23:48.983839   11080 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 00:23:48.999536   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0904 00:23:49.020511   11080 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0904 00:23:49.070991   11080 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 00:23:49.102645   11080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I0904 00:23:49.149363   11080 ssh_runner.go:195] Run: grep 172.25.112.78	control-plane.minikube.internal$ /etc/hosts
	I0904 00:23:49.156943   11080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.112.78	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 00:23:49.206124   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 00:23:49.440490   11080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 00:23:49.493522   11080 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700 for IP: 172.25.112.78
	I0904 00:23:49.493522   11080 certs.go:194] generating shared ca certs ...
	I0904 00:23:49.493522   11080 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 00:23:49.493522   11080 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0904 00:23:49.494502   11080 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0904 00:23:49.494502   11080 certs.go:256] generating profile certs ...
	I0904 00:23:49.495506   11080 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\client.key
	I0904 00:23:49.495506   11080 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.key.dbd75d0e
	I0904 00:23:49.495506   11080 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.crt.dbd75d0e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.112.78]
	I0904 00:23:49.960723   11080 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.crt.dbd75d0e ...
	I0904 00:23:49.960723   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.crt.dbd75d0e: {Name:mkc4978833b15a71f00486612ea48025cdf11766 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 00:23:49.962716   11080 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.key.dbd75d0e ...
	I0904 00:23:49.962716   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.key.dbd75d0e: {Name:mkb61b9562aa8169185c7c3992ef11de4fa7a300 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 00:23:49.963461   11080 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.crt.dbd75d0e -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.crt
	I0904 00:23:49.979350   11080 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.key.dbd75d0e -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.key
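The apiserver certificate above is generated in Go (crypto.go) with four SAN IPs: the in-cluster service VIP, loopback, 10.0.0.1, and the node IP. A purely illustrative openssl equivalent (temporary paths; assumes OpenSSL ≥ 1.1.1 for `-addext`), not the code minikube actually runs:

```shell
dir=$(mktemp -d)
# self-signed throwaway cert carrying the same SAN IP list as the log
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$dir/apiserver.key" -out "$dir/apiserver.crt" \
  -subj "/CN=minikube" \
  -addext 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:172.25.112.78' \
  >/dev/null 2>&1
sans=$(openssl x509 -in "$dir/apiserver.crt" -noout -ext subjectAltName)
echo "$sans"
rm -rf "$dir"
```

The node IP in the SAN list is why a changed VM address (common under Hyper-V's Default Switch DHCP) forces the `apiserver.crt.<hash>` regeneration seen here rather than reuse of a cached cert.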
	I0904 00:23:49.980836   11080 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\proxy-client.key
	I0904 00:23:49.980836   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0904 00:23:49.981127   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0904 00:23:49.981347   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0904 00:23:49.981347   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0904 00:23:49.982444   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0904 00:23:49.982638   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0904 00:23:49.982837   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0904 00:23:49.983035   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0904 00:23:49.983236   11080 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220.pem (1338 bytes)
	W0904 00:23:49.983956   11080 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220_empty.pem, impossibly tiny 0 bytes
	I0904 00:23:49.983956   11080 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0904 00:23:49.983956   11080 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0904 00:23:49.984738   11080 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0904 00:23:49.985306   11080 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0904 00:23:49.985639   11080 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem (1708 bytes)
	I0904 00:23:49.985639   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220.pem -> /usr/share/ca-certificates/2220.pem
	I0904 00:23:49.986286   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem -> /usr/share/ca-certificates/22202.pem
	I0904 00:23:49.986669   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0904 00:23:49.987978   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 00:23:50.053485   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0904 00:23:50.110699   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 00:23:50.162718   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0904 00:23:50.213955   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0904 00:23:50.262310   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0904 00:23:50.311608   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 00:23:50.359763   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0904 00:23:50.408738   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220.pem --> /usr/share/ca-certificates/2220.pem (1338 bytes)
	I0904 00:23:50.459603   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem --> /usr/share/ca-certificates/22202.pem (1708 bytes)
	I0904 00:23:50.500939   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 00:23:50.545159   11080 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0904 00:23:50.589999   11080 ssh_runner.go:195] Run: openssl version
	I0904 00:23:50.615208   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 00:23:50.648532   11080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 00:23:50.659253   11080 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  3 22:20 /usr/share/ca-certificates/minikubeCA.pem
	I0904 00:23:50.670810   11080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 00:23:50.695008   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0904 00:23:50.728954   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2220.pem && ln -fs /usr/share/ca-certificates/2220.pem /etc/ssl/certs/2220.pem"
	I0904 00:23:50.771184   11080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2220.pem
	I0904 00:23:50.779825   11080 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  3 22:37 /usr/share/ca-certificates/2220.pem
	I0904 00:23:50.789888   11080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2220.pem
	I0904 00:23:50.812361   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2220.pem /etc/ssl/certs/51391683.0"
	I0904 00:23:50.846158   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22202.pem && ln -fs /usr/share/ca-certificates/22202.pem /etc/ssl/certs/22202.pem"
	I0904 00:23:50.877425   11080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22202.pem
	I0904 00:23:50.884365   11080 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  3 22:37 /usr/share/ca-certificates/22202.pem
	I0904 00:23:50.896334   11080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22202.pem
	I0904 00:23:50.915820   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22202.pem /etc/ssl/certs/3ec20f2e.0"
	I0904 00:23:50.947513   11080 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 00:23:50.967845   11080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0904 00:23:50.989518   11080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0904 00:23:51.012345   11080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0904 00:23:51.037223   11080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0904 00:23:51.059945   11080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0904 00:23:51.086041   11080 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0904 00:23:51.098169   11080 kubeadm.go:392] StartCluster: {Name:multinode-477700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:multinode-477700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.112.78 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.125.181 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.25.125.123 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 00:23:51.107832   11080 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0904 00:23:51.147805   11080 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0904 00:23:51.174384   11080 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0904 00:23:51.174519   11080 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0904 00:23:51.187760   11080 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0904 00:23:51.213778   11080 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0904 00:23:51.215223   11080 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-477700" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0904 00:23:51.215745   11080 kubeconfig.go:62] C:\Users\jenkins.minikube6\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-477700" cluster setting kubeconfig missing "multinode-477700" context setting]
	I0904 00:23:51.216507   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 00:23:51.234761   11080 kapi.go:59] client config for multinode-477700: &rest.Config{Host:"https://172.25.112.78:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-477700/client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-477700/client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24e0580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0904 00:23:51.236785   11080 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0904 00:23:51.236785   11080 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0904 00:23:51.236785   11080 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0904 00:23:51.236785   11080 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0904 00:23:51.236785   11080 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0904 00:23:51.236785   11080 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0904 00:23:51.248760   11080 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0904 00:23:51.267760   11080 kubeadm.go:636] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.25.126.63
	+  advertiseAddress: 172.25.112.78
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -15,13 +15,13 @@
	   name: "multinode-477700"
	   kubeletExtraArgs:
	     - name: "node-ip"
	-      value: "172.25.126.63"
	+      value: "172.25.112.78"
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.25.126.63"]
	+  certSANs: ["127.0.0.1", "localhost", "172.25.112.78"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	       value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	
	-- /stdout --
	I0904 00:23:51.267760   11080 kubeadm.go:1152] stopping kube-system containers ...
	I0904 00:23:51.275748   11080 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0904 00:23:51.305359   11080 docker.go:484] Stopping containers: [89b7640b7697 cd3b66b73cb4 7ec79c04c516 882d6e338723 3dd1de246060 a5c4aad9ef6f 71185e7e5e3a 4c1d437a10c4 0545be46c0c9 944ecb490268 2b011dd581a4 774d3869c70e 8b34bc6a82c9 e2706c7084c7 9b5837c04c52 be2ad3b809d0]
	I0904 00:23:51.314731   11080 ssh_runner.go:195] Run: docker stop 89b7640b7697 cd3b66b73cb4 7ec79c04c516 882d6e338723 3dd1de246060 a5c4aad9ef6f 71185e7e5e3a 4c1d437a10c4 0545be46c0c9 944ecb490268 2b011dd581a4 774d3869c70e 8b34bc6a82c9 e2706c7084c7 9b5837c04c52 be2ad3b809d0
	I0904 00:23:51.368330   11080 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0904 00:23:51.411099   11080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0904 00:23:51.432015   11080 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0904 00:23:51.432015   11080 kubeadm.go:157] found existing configuration files:
	
	I0904 00:23:51.443005   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0904 00:23:51.463008   11080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0904 00:23:51.474014   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0904 00:23:51.505120   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0904 00:23:51.523000   11080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0904 00:23:51.534862   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0904 00:23:51.565590   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0904 00:23:51.583073   11080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0904 00:23:51.595160   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0904 00:23:51.625086   11080 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0904 00:23:51.641069   11080 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0904 00:23:51.653073   11080 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0904 00:23:51.686003   11080 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0904 00:23:51.704982   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0904 00:23:52.021787   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0904 00:23:54.223104   11080 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.2012122s)
	I0904 00:23:54.223196   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0904 00:23:54.578445   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0904 00:23:54.660171   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0904 00:23:54.781389   11080 api_server.go:52] waiting for apiserver process to appear ...
	I0904 00:23:54.795755   11080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 00:23:55.295777   11080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 00:23:55.795594   11080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 00:23:56.296041   11080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 00:23:56.794673   11080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 00:23:56.827636   11080 api_server.go:72] duration metric: took 2.0461631s to wait for apiserver process to appear ...
	I0904 00:23:56.827636   11080 api_server.go:88] waiting for apiserver healthz status ...
	I0904 00:23:56.827636   11080 api_server.go:253] Checking apiserver healthz at https://172.25.112.78:8443/healthz ...
	I0904 00:24:01.049057   11080 api_server.go:279] https://172.25.112.78:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0904 00:24:01.049057   11080 api_server.go:103] status: https://172.25.112.78:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0904 00:24:01.050051   11080 api_server.go:253] Checking apiserver healthz at https://172.25.112.78:8443/healthz ...
	I0904 00:24:01.247152   11080 api_server.go:279] https://172.25.112.78:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 00:24:01.247474   11080 api_server.go:103] status: https://172.25.112.78:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 00:24:01.328774   11080 api_server.go:253] Checking apiserver healthz at https://172.25.112.78:8443/healthz ...
	I0904 00:24:01.344034   11080 api_server.go:279] https://172.25.112.78:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 00:24:01.344118   11080 api_server.go:103] status: https://172.25.112.78:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 00:24:01.828435   11080 api_server.go:253] Checking apiserver healthz at https://172.25.112.78:8443/healthz ...
	I0904 00:24:01.837035   11080 api_server.go:279] https://172.25.112.78:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 00:24:01.837035   11080 api_server.go:103] status: https://172.25.112.78:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 00:24:02.328357   11080 api_server.go:253] Checking apiserver healthz at https://172.25.112.78:8443/healthz ...
	I0904 00:24:02.338095   11080 api_server.go:279] https://172.25.112.78:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0904 00:24:02.338095   11080 api_server.go:103] status: https://172.25.112.78:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0904 00:24:02.828160   11080 api_server.go:253] Checking apiserver healthz at https://172.25.112.78:8443/healthz ...
	I0904 00:24:02.836449   11080 api_server.go:279] https://172.25.112.78:8443/healthz returned 200:
	ok
	I0904 00:24:02.856741   11080 api_server.go:141] control plane version: v1.34.0
	I0904 00:24:02.856741   11080 api_server.go:131] duration metric: took 6.0290238s to wait for apiserver health ...
	I0904 00:24:02.856741   11080 cni.go:84] Creating CNI manager for ""
	I0904 00:24:02.856813   11080 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0904 00:24:02.861636   11080 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0904 00:24:02.880273   11080 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0904 00:24:02.896192   11080 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0904 00:24:02.896192   11080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0904 00:24:03.036013   11080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0904 00:24:04.760395   11080 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.7243588s)
	I0904 00:24:04.760506   11080 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 00:24:04.793520   11080 system_pods.go:59] 12 kube-system pods found
	I0904 00:24:04.793520   11080 system_pods.go:61] "coredns-66bc5c9577-mg9nc" [39d4fb7b-1473-4a4e-9fb1-ce058a1c4904] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 00:24:04.793520   11080 system_pods.go:61] "etcd-multinode-477700" [f619f984-ced6-403e-bd87-70c0ad7b008d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0904 00:24:04.793520   11080 system_pods.go:61] "kindnet-gdpss" [2af7872d-5ba2-4df0-89ef-eb2c46ddd319] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0904 00:24:04.793520   11080 system_pods.go:61] "kindnet-gj9bp" [d46acd35-8083-498f-805b-ca4a3cf9ee14] Running
	I0904 00:24:04.793520   11080 system_pods.go:61] "kindnet-ljv6w" [70d4500f-98bf-4e06-a7e6-b7e219dcb428] Running
	I0904 00:24:04.793520   11080 system_pods.go:61] "kube-apiserver-multinode-477700" [dbbb2637-2c76-4436-aeaf-47e07cf0b8cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0904 00:24:04.793520   11080 system_pods.go:61] "kube-controller-manager-multinode-477700" [4171909c-4c75-4c40-9e8f-89b31bfd0f3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0904 00:24:04.793520   11080 system_pods.go:61] "kube-proxy-lnh8p" [16cf2fb9-db73-4972-a48b-e5492d3bd79f] Running
	I0904 00:24:04.793520   11080 system_pods.go:61] "kube-proxy-rbxm9" [cf3a297f-0ef0-418b-ba87-3f2966bba73e] Running
	I0904 00:24:04.794138   11080 system_pods.go:61] "kube-proxy-v9bfx" [2e72957a-51b3-4f18-876a-32d17f1fcb01] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0904 00:24:04.794138   11080 system_pods.go:61] "kube-scheduler-multinode-477700" [9600bbee-3d89-49b5-9e4a-2b6eb499de52] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0904 00:24:04.794138   11080 system_pods.go:61] "storage-provisioner" [6ff776d2-685f-4111-bbe0-2d7f616fed2a] Running
	I0904 00:24:04.794138   11080 system_pods.go:74] duration metric: took 33.6314ms to wait for pod list to return data ...
	I0904 00:24:04.794225   11080 node_conditions.go:102] verifying NodePressure condition ...
	I0904 00:24:04.811537   11080 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0904 00:24:04.811639   11080 node_conditions.go:123] node cpu capacity is 2
	I0904 00:24:04.811708   11080 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0904 00:24:04.811708   11080 node_conditions.go:123] node cpu capacity is 2
	I0904 00:24:04.811784   11080 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0904 00:24:04.811784   11080 node_conditions.go:123] node cpu capacity is 2
	I0904 00:24:04.811850   11080 node_conditions.go:105] duration metric: took 17.6249ms to run NodePressure ...
	I0904 00:24:04.811935   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0904 00:24:05.447591   11080 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0904 00:24:05.453888   11080 kubeadm.go:735] kubelet initialised
	I0904 00:24:05.453995   11080 kubeadm.go:736] duration metric: took 6.4043ms waiting for restarted kubelet to initialise ...
	I0904 00:24:05.453995   11080 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0904 00:24:05.484523   11080 ops.go:34] apiserver oom_adj: -16
	I0904 00:24:05.484523   11080 kubeadm.go:593] duration metric: took 14.3098102s to restartPrimaryControlPlane
	I0904 00:24:05.484523   11080 kubeadm.go:394] duration metric: took 14.386159s to StartCluster
	I0904 00:24:05.484523   11080 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 00:24:05.484523   11080 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0904 00:24:05.487385   11080 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 00:24:05.488591   11080 start.go:235] Will wait 6m0s for node &{Name: IP:172.25.112.78 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 00:24:05.488591   11080 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0904 00:24:05.489154   11080 config.go:182] Loaded profile config "multinode-477700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0904 00:24:05.493519   11080 out.go:179] * Verifying Kubernetes components...
	I0904 00:24:05.497772   11080 out.go:179] * Enabled addons: 
	I0904 00:24:05.502489   11080 addons.go:514] duration metric: took 13.8972ms for enable addons: enabled=[]
	I0904 00:24:05.512987   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 00:24:05.899735   11080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 00:24:05.933828   11080 node_ready.go:35] waiting up to 6m0s for node "multinode-477700" to be "Ready" ...
	W0904 00:24:07.941319   11080 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	W0904 00:24:10.442659   11080 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	W0904 00:24:12.939126   11080 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	W0904 00:24:14.940399   11080 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	W0904 00:24:17.441390   11080 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	W0904 00:24:19.940942   11080 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	W0904 00:24:22.440638   11080 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	W0904 00:24:24.943153   11080 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	W0904 00:24:27.439415   11080 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	W0904 00:24:29.440361   11080 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	W0904 00:24:31.940178   11080 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	W0904 00:24:34.438539   11080 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	W0904 00:24:36.947640   11080 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	W0904 00:24:39.440875   11080 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	W0904 00:24:41.939386   11080 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	W0904 00:24:44.440068   11080 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	W0904 00:24:46.443378   11080 node_ready.go:57] node "multinode-477700" has "Ready":"False" status (will retry)
	I0904 00:24:48.939306   11080 node_ready.go:49] node "multinode-477700" is "Ready"
	I0904 00:24:48.939475   11080 node_ready.go:38] duration metric: took 43.0049851s for node "multinode-477700" to be "Ready" ...
	I0904 00:24:48.939475   11080 api_server.go:52] waiting for apiserver process to appear ...
	I0904 00:24:48.951580   11080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 00:24:48.992051   11080 api_server.go:72] duration metric: took 43.5028718s to wait for apiserver process to appear ...
	I0904 00:24:48.992051   11080 api_server.go:88] waiting for apiserver healthz status ...
	I0904 00:24:48.992051   11080 api_server.go:253] Checking apiserver healthz at https://172.25.112.78:8443/healthz ...
	I0904 00:24:49.001309   11080 api_server.go:279] https://172.25.112.78:8443/healthz returned 200:
	ok
	I0904 00:24:49.002842   11080 api_server.go:141] control plane version: v1.34.0
	I0904 00:24:49.002943   11080 api_server.go:131] duration metric: took 10.8921ms to wait for apiserver health ...
	I0904 00:24:49.002943   11080 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 00:24:49.009997   11080 system_pods.go:59] 12 kube-system pods found
	I0904 00:24:49.010043   11080 system_pods.go:61] "coredns-66bc5c9577-mg9nc" [39d4fb7b-1473-4a4e-9fb1-ce058a1c4904] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 00:24:49.010043   11080 system_pods.go:61] "etcd-multinode-477700" [f619f984-ced6-403e-bd87-70c0ad7b008d] Running
	I0904 00:24:49.010043   11080 system_pods.go:61] "kindnet-gdpss" [2af7872d-5ba2-4df0-89ef-eb2c46ddd319] Running
	I0904 00:24:49.010043   11080 system_pods.go:61] "kindnet-gj9bp" [d46acd35-8083-498f-805b-ca4a3cf9ee14] Running
	I0904 00:24:49.010043   11080 system_pods.go:61] "kindnet-ljv6w" [70d4500f-98bf-4e06-a7e6-b7e219dcb428] Running
	I0904 00:24:49.010043   11080 system_pods.go:61] "kube-apiserver-multinode-477700" [dbbb2637-2c76-4436-aeaf-47e07cf0b8cb] Running
	I0904 00:24:49.010043   11080 system_pods.go:61] "kube-controller-manager-multinode-477700" [4171909c-4c75-4c40-9e8f-89b31bfd0f3a] Running
	I0904 00:24:49.010043   11080 system_pods.go:61] "kube-proxy-lnh8p" [16cf2fb9-db73-4972-a48b-e5492d3bd79f] Running
	I0904 00:24:49.010043   11080 system_pods.go:61] "kube-proxy-rbxm9" [cf3a297f-0ef0-418b-ba87-3f2966bba73e] Running
	I0904 00:24:49.010043   11080 system_pods.go:61] "kube-proxy-v9bfx" [2e72957a-51b3-4f18-876a-32d17f1fcb01] Running
	I0904 00:24:49.010043   11080 system_pods.go:61] "kube-scheduler-multinode-477700" [9600bbee-3d89-49b5-9e4a-2b6eb499de52] Running
	I0904 00:24:49.010043   11080 system_pods.go:61] "storage-provisioner" [6ff776d2-685f-4111-bbe0-2d7f616fed2a] Running
	I0904 00:24:49.010043   11080 system_pods.go:74] duration metric: took 7.0998ms to wait for pod list to return data ...
	I0904 00:24:49.010043   11080 default_sa.go:34] waiting for default service account to be created ...
	I0904 00:24:49.014161   11080 default_sa.go:45] found service account: "default"
	I0904 00:24:49.014161   11080 default_sa.go:55] duration metric: took 4.1186ms for default service account to be created ...
	I0904 00:24:49.014161   11080 system_pods.go:116] waiting for k8s-apps to be running ...
	I0904 00:24:49.017644   11080 system_pods.go:86] 12 kube-system pods found
	I0904 00:24:49.017644   11080 system_pods.go:89] "coredns-66bc5c9577-mg9nc" [39d4fb7b-1473-4a4e-9fb1-ce058a1c4904] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0904 00:24:49.017644   11080 system_pods.go:89] "etcd-multinode-477700" [f619f984-ced6-403e-bd87-70c0ad7b008d] Running
	I0904 00:24:49.017644   11080 system_pods.go:89] "kindnet-gdpss" [2af7872d-5ba2-4df0-89ef-eb2c46ddd319] Running
	I0904 00:24:49.017644   11080 system_pods.go:89] "kindnet-gj9bp" [d46acd35-8083-498f-805b-ca4a3cf9ee14] Running
	I0904 00:24:49.017644   11080 system_pods.go:89] "kindnet-ljv6w" [70d4500f-98bf-4e06-a7e6-b7e219dcb428] Running
	I0904 00:24:49.017644   11080 system_pods.go:89] "kube-apiserver-multinode-477700" [dbbb2637-2c76-4436-aeaf-47e07cf0b8cb] Running
	I0904 00:24:49.017644   11080 system_pods.go:89] "kube-controller-manager-multinode-477700" [4171909c-4c75-4c40-9e8f-89b31bfd0f3a] Running
	I0904 00:24:49.017644   11080 system_pods.go:89] "kube-proxy-lnh8p" [16cf2fb9-db73-4972-a48b-e5492d3bd79f] Running
	I0904 00:24:49.017644   11080 system_pods.go:89] "kube-proxy-rbxm9" [cf3a297f-0ef0-418b-ba87-3f2966bba73e] Running
	I0904 00:24:49.017644   11080 system_pods.go:89] "kube-proxy-v9bfx" [2e72957a-51b3-4f18-876a-32d17f1fcb01] Running
	I0904 00:24:49.017644   11080 system_pods.go:89] "kube-scheduler-multinode-477700" [9600bbee-3d89-49b5-9e4a-2b6eb499de52] Running
	I0904 00:24:49.017644   11080 system_pods.go:89] "storage-provisioner" [6ff776d2-685f-4111-bbe0-2d7f616fed2a] Running
	I0904 00:24:49.017644   11080 system_pods.go:126] duration metric: took 3.4825ms to wait for k8s-apps to be running ...
	I0904 00:24:49.017644   11080 system_svc.go:44] waiting for kubelet service to be running ....
	I0904 00:24:49.032027   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 00:24:49.060447   11080 system_svc.go:56] duration metric: took 42.803ms WaitForService to wait for kubelet
	I0904 00:24:49.060509   11080 kubeadm.go:578] duration metric: took 43.5713297s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 00:24:49.060574   11080 node_conditions.go:102] verifying NodePressure condition ...
	I0904 00:24:49.064990   11080 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0904 00:24:49.065042   11080 node_conditions.go:123] node cpu capacity is 2
	I0904 00:24:49.065112   11080 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0904 00:24:49.065112   11080 node_conditions.go:123] node cpu capacity is 2
	I0904 00:24:49.065112   11080 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0904 00:24:49.065183   11080 node_conditions.go:123] node cpu capacity is 2
	I0904 00:24:49.065183   11080 node_conditions.go:105] duration metric: took 4.6085ms to run NodePressure ...
	I0904 00:24:49.065183   11080 start.go:241] waiting for startup goroutines ...
	I0904 00:24:49.065278   11080 start.go:246] waiting for cluster config update ...
	I0904 00:24:49.065337   11080 start.go:255] writing updated cluster config ...
	I0904 00:24:49.069773   11080 out.go:203] 
	I0904 00:24:49.073016   11080 config.go:182] Loaded profile config "ha-270000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0904 00:24:49.084033   11080 config.go:182] Loaded profile config "multinode-477700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0904 00:24:49.084033   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\config.json ...
	I0904 00:24:49.089976   11080 out.go:179] * Starting "multinode-477700-m02" worker node in "multinode-477700" cluster
	I0904 00:24:49.095471   11080 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0904 00:24:49.095471   11080 cache.go:58] Caching tarball of preloaded images
	I0904 00:24:49.095748   11080 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0904 00:24:49.095748   11080 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0904 00:24:49.095748   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\config.json ...
	I0904 00:24:49.098937   11080 start.go:360] acquireMachinesLock for multinode-477700-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 00:24:49.099161   11080 start.go:364] duration metric: took 108.5µs to acquireMachinesLock for "multinode-477700-m02"
	I0904 00:24:49.099285   11080 start.go:96] Skipping create...Using existing machine configuration
	I0904 00:24:49.099285   11080 fix.go:54] fixHost starting: m02
	I0904 00:24:49.100005   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:24:51.131472   11080 main.go:141] libmachine: [stdout =====>] : Off
	
	I0904 00:24:51.132494   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:24:51.132523   11080 fix.go:112] recreateIfNeeded on multinode-477700-m02: state=Stopped err=<nil>
	W0904 00:24:51.132562   11080 fix.go:138] unexpected machine state, will restart: <nil>
	I0904 00:24:51.137809   11080 out.go:252] * Restarting existing hyperv VM for "multinode-477700-m02" ...
	I0904 00:24:51.138053   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-477700-m02
	I0904 00:24:54.170456   11080 main.go:141] libmachine: [stdout =====>] : 
	I0904 00:24:54.170456   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:24:54.170456   11080 main.go:141] libmachine: Waiting for host to start...
	I0904 00:24:54.170456   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:24:56.371836   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:24:56.371836   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:24:56.371836   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:24:58.815221   11080 main.go:141] libmachine: [stdout =====>] : 
	I0904 00:24:58.815577   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:24:59.816458   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:25:01.987275   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:25:01.987275   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:01.987275   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:25:04.465033   11080 main.go:141] libmachine: [stdout =====>] : 
	I0904 00:25:04.465033   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:05.466313   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:25:07.615293   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:25:07.616310   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:07.616310   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:25:10.093217   11080 main.go:141] libmachine: [stdout =====>] : 
	I0904 00:25:10.093283   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:11.093756   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:25:13.234446   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:25:13.234446   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:13.234446   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:25:15.726366   11080 main.go:141] libmachine: [stdout =====>] : 
	I0904 00:25:15.726796   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:16.727967   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:25:18.863999   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:25:18.863999   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:18.864477   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:25:21.419110   11080 main.go:141] libmachine: [stdout =====>] : 172.25.123.14
	
	I0904 00:25:21.419110   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:21.422199   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:25:23.594740   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:25:23.594926   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:23.595008   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:25:26.168522   11080 main.go:141] libmachine: [stdout =====>] : 172.25.123.14
	
	I0904 00:25:26.168522   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:26.169038   11080 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700\config.json ...
	I0904 00:25:26.171980   11080 machine.go:93] provisionDockerMachine start ...
	I0904 00:25:26.171980   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:25:28.290799   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:25:28.290832   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:28.290832   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:25:30.968338   11080 main.go:141] libmachine: [stdout =====>] : 172.25.123.14
	
	I0904 00:25:30.968539   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:30.974964   11080 main.go:141] libmachine: Using SSH client type: native
	I0904 00:25:30.975788   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.123.14 22 <nil> <nil>}
	I0904 00:25:30.975788   11080 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 00:25:31.104710   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0904 00:25:31.104768   11080 buildroot.go:166] provisioning hostname "multinode-477700-m02"
	I0904 00:25:31.104837   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:25:33.247116   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:25:33.247116   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:33.247861   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:25:35.795449   11080 main.go:141] libmachine: [stdout =====>] : 172.25.123.14
	
	I0904 00:25:35.795449   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:35.801989   11080 main.go:141] libmachine: Using SSH client type: native
	I0904 00:25:35.802316   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.123.14 22 <nil> <nil>}
	I0904 00:25:35.802316   11080 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-477700-m02 && echo "multinode-477700-m02" | sudo tee /etc/hostname
	I0904 00:25:35.950213   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-477700-m02
	
	I0904 00:25:35.950385   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:25:38.097286   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:25:38.097286   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:38.097286   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:25:40.571167   11080 main.go:141] libmachine: [stdout =====>] : 172.25.123.14
	
	I0904 00:25:40.571167   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:40.577061   11080 main.go:141] libmachine: Using SSH client type: native
	I0904 00:25:40.577612   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.123.14 22 <nil> <nil>}
	I0904 00:25:40.577612   11080 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-477700-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-477700-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-477700-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 00:25:40.717033   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 
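The SSH command above either rewrites an existing `127.0.1.1` entry or appends a new one so the node's hostname resolves locally. A rough Python re-expression of that branch logic, run against an in-memory sample rather than a real `/etc/hosts` (the sample content is illustrative, not from the log):

```python
# Sketch of the /etc/hosts update logic from the SSH command above,
# applied to an in-memory string so no root access is needed.
import re

hosts = "127.0.0.1 localhost\n127.0.1.1 oldname\n"  # hypothetical sample content
name = "multinode-477700-m02"

# Only touch the file if the hostname is not already present.
if not re.search(rf"\s{re.escape(name)}$", hosts, re.M):
    if re.search(r"^127\.0\.1\.1\s", hosts, re.M):
        # An existing 127.0.1.1 line gets rewritten in place (the sed branch).
        hosts = re.sub(r"^127\.0\.1\.1\s.*$", f"127.0.1.1 {name}", hosts, flags=re.M)
    else:
        # Otherwise a new line is appended (the tee -a branch).
        hosts += f"127.0.1.1 {name}\n"

print(hosts, end="")
```

Note the shell original uses `grep -xq` (whole-line match); the regex here approximates that with anchors, which is close but not byte-for-byte identical semantics.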
	I0904 00:25:40.717033   11080 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0904 00:25:40.717033   11080 buildroot.go:174] setting up certificates
	I0904 00:25:40.717033   11080 provision.go:84] configureAuth start
	I0904 00:25:40.717033   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:25:42.768208   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:25:42.768208   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:42.769064   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:25:45.241408   11080 main.go:141] libmachine: [stdout =====>] : 172.25.123.14
	
	I0904 00:25:45.241408   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:45.241408   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:25:47.325193   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:25:47.325517   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:47.325517   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:25:49.886176   11080 main.go:141] libmachine: [stdout =====>] : 172.25.123.14
	
	I0904 00:25:49.886176   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:49.886176   11080 provision.go:143] copyHostCerts
	I0904 00:25:49.886913   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0904 00:25:49.887060   11080 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0904 00:25:49.887060   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0904 00:25:49.887716   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0904 00:25:49.889108   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0904 00:25:49.889421   11080 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0904 00:25:49.889463   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0904 00:25:49.889687   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0904 00:25:49.890404   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0904 00:25:49.891384   11080 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0904 00:25:49.891384   11080 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0904 00:25:49.891688   11080 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0904 00:25:49.892489   11080 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-477700-m02 san=[127.0.0.1 172.25.123.14 localhost minikube multinode-477700-m02]
	I0904 00:25:50.114539   11080 provision.go:177] copyRemoteCerts
	I0904 00:25:50.126641   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 00:25:50.126641   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:25:52.190332   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:25:52.191309   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:52.191407   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:25:54.684710   11080 main.go:141] libmachine: [stdout =====>] : 172.25.123.14
	
	I0904 00:25:54.685127   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:54.686074   11080 sshutil.go:53] new ssh client: &{IP:172.25.123.14 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700-m02\id_rsa Username:docker}
	I0904 00:25:54.802062   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6753586s)
	I0904 00:25:54.802154   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0904 00:25:54.802495   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0904 00:25:54.855915   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0904 00:25:54.856067   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0904 00:25:54.909487   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0904 00:25:54.909646   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0904 00:25:54.959257   11080 provision.go:87] duration metric: took 14.2420332s to configureAuth
	I0904 00:25:54.959340   11080 buildroot.go:189] setting minikube options for container-runtime
	I0904 00:25:54.960101   11080 config.go:182] Loaded profile config "multinode-477700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0904 00:25:54.960175   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:25:57.019585   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:25:57.020613   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:57.020699   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:25:59.525142   11080 main.go:141] libmachine: [stdout =====>] : 172.25.123.14
	
	I0904 00:25:59.525224   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:25:59.531789   11080 main.go:141] libmachine: Using SSH client type: native
	I0904 00:25:59.532528   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.123.14 22 <nil> <nil>}
	I0904 00:25:59.532528   11080 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0904 00:25:59.666075   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0904 00:25:59.666075   11080 buildroot.go:70] root file system type: tmpfs
	I0904 00:25:59.666075   11080 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0904 00:25:59.666075   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:26:01.721342   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:26:01.721342   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:01.721342   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:26:04.210280   11080 main.go:141] libmachine: [stdout =====>] : 172.25.123.14
	
	I0904 00:26:04.211340   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:04.218110   11080 main.go:141] libmachine: Using SSH client type: native
	I0904 00:26:04.218839   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.123.14 22 <nil> <nil>}
	I0904 00:26:04.218839   11080 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=172.25.112.78"
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0904 00:26:04.381441   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=172.25.112.78
	
	
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0904 00:26:04.381559   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:26:06.470781   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:26:06.470781   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:06.470781   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:26:08.977868   11080 main.go:141] libmachine: [stdout =====>] : 172.25.123.14
	
	I0904 00:26:08.977868   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:08.983945   11080 main.go:141] libmachine: Using SSH client type: native
	I0904 00:26:08.984614   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.123.14 22 <nil> <nil>}
	I0904 00:26:08.984614   11080 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0904 00:26:10.533120   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink '/etc/systemd/system/multi-user.target.wants/docker.service' → '/usr/lib/systemd/system/docker.service'.
	
	I0904 00:26:10.533120   11080 machine.go:96] duration metric: took 44.3605443s to provisionDockerMachine
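The `diff ... || { mv ...; systemctl ...; }` command above is an "install only if changed" idiom: the unit is replaced and docker restarted only when the new file differs (or, as in this log, when the old file does not exist at all, which is why `diff` reported "can't stat"). A minimal sketch of the same idiom using temp files instead of systemd units (paths hypothetical, no sudo):

```python
# "Install only if changed" idiom from the command above, demoed on temp files.
import filecmp
import os
import tempfile

d = tempfile.mkdtemp()
old = os.path.join(d, "docker.service")       # stands in for /lib/systemd/system/docker.service
new = os.path.join(d, "docker.service.new")   # the freshly rendered unit

with open(new, "w") as f:
    f.write("ExecStart=/usr/bin/dockerd --tlsverify\n")

# old does not exist, mirroring the log, so the comparison fails
# and the new file is moved into place (the mv branch).
if not (os.path.exists(old) and filecmp.cmp(old, new)):
    os.replace(new, old)

print(os.path.exists(old), os.path.exists(new))  # True False
```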
	I0904 00:26:10.533120   11080 start.go:293] postStartSetup for "multinode-477700-m02" (driver="hyperv")
	I0904 00:26:10.533120   11080 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 00:26:10.545641   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 00:26:10.545641   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:26:12.613161   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:26:12.613860   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:12.613907   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:26:15.147654   11080 main.go:141] libmachine: [stdout =====>] : 172.25.123.14
	
	I0904 00:26:15.147826   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:15.148276   11080 sshutil.go:53] new ssh client: &{IP:172.25.123.14 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700-m02\id_rsa Username:docker}
	I0904 00:26:15.258020   11080 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.712221s)
	I0904 00:26:15.271721   11080 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 00:26:15.280557   11080 info.go:137] Remote host: Buildroot 2025.02
	I0904 00:26:15.280557   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0904 00:26:15.280557   11080 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0904 00:26:15.282277   11080 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem -> 22202.pem in /etc/ssl/certs
	I0904 00:26:15.282277   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem -> /etc/ssl/certs/22202.pem
	I0904 00:26:15.295706   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0904 00:26:15.316974   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem --> /etc/ssl/certs/22202.pem (1708 bytes)
	I0904 00:26:15.371091   11080 start.go:296] duration metric: took 4.837906s for postStartSetup
	I0904 00:26:15.371091   11080 fix.go:56] duration metric: took 1m26.2706448s for fixHost
	I0904 00:26:15.371091   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:26:17.504527   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:26:17.504527   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:17.504527   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:26:20.013981   11080 main.go:141] libmachine: [stdout =====>] : 172.25.123.14
	
	I0904 00:26:20.013981   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:20.019577   11080 main.go:141] libmachine: Using SSH client type: native
	I0904 00:26:20.019842   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.123.14 22 <nil> <nil>}
	I0904 00:26:20.020475   11080 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0904 00:26:20.151981   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: 1756945580.153378502
	
	I0904 00:26:20.151981   11080 fix.go:216] guest clock: 1756945580.153378502
	I0904 00:26:20.151981   11080 fix.go:229] Guest: 2025-09-04 00:26:20.153378502 +0000 UTC Remote: 2025-09-04 00:26:15.3710915 +0000 UTC m=+261.631908301 (delta=4.782287002s)
	I0904 00:26:20.152155   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:26:22.251815   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:26:22.252910   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:22.252910   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:26:24.763825   11080 main.go:141] libmachine: [stdout =====>] : 172.25.123.14
	
	I0904 00:26:24.763825   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:24.769586   11080 main.go:141] libmachine: Using SSH client type: native
	I0904 00:26:24.770594   11080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.123.14 22 <nil> <nil>}
	I0904 00:26:24.770594   11080 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1756945580
	I0904 00:26:24.910687   11080 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Sep  4 00:26:20 UTC 2025
	
	I0904 00:26:24.910720   11080 fix.go:236] clock set: Thu Sep  4 00:26:20 UTC 2025
	 (err=<nil>)
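The `delta=4.782287002s` reported in fix.go above is just the guest clock minus the host-side wall clock. Re-deriving it from the two timestamps in the log:

```python
# Recompute the clock delta from the log: guest epoch seconds vs. the
# host-side "Remote" timestamp (truncated to microsecond precision here).
from datetime import datetime, timezone

guest = datetime.fromtimestamp(1756945580.153378502, tz=timezone.utc)
remote = datetime(2025, 9, 4, 0, 26, 15, 371091, tzinfo=timezone.utc)

delta = (guest - remote).total_seconds()
print(round(delta, 3))  # ~4.782, matching the logged delta=4.782287002s
```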
	I0904 00:26:24.910798   11080 start.go:83] releasing machines lock for "multinode-477700-m02", held for 1m35.8102217s
	I0904 00:26:24.911070   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:26:27.013725   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:26:27.014702   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:27.014814   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:26:29.548019   11080 main.go:141] libmachine: [stdout =====>] : 172.25.123.14
	
	I0904 00:26:29.548019   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:29.552347   11080 out.go:179] * Found network options:
	I0904 00:26:29.555594   11080 out.go:179]   - NO_PROXY=172.25.112.78
	W0904 00:26:29.558148   11080 proxy.go:120] fail to check proxy env: Error ip not in block
	I0904 00:26:29.560499   11080 out.go:179]   - NO_PROXY=172.25.112.78
	W0904 00:26:29.563455   11080 proxy.go:120] fail to check proxy env: Error ip not in block
	W0904 00:26:29.565490   11080 proxy.go:120] fail to check proxy env: Error ip not in block
	I0904 00:26:29.567485   11080 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0904 00:26:29.567485   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:26:29.577459   11080 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0904 00:26:29.577459   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:26:31.736330   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:26:31.736686   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:31.736686   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:26:31.737409   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:26:31.737409   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:31.737409   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:26:34.361806   11080 main.go:141] libmachine: [stdout =====>] : 172.25.123.14
	
	I0904 00:26:34.362490   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:34.362928   11080 sshutil.go:53] new ssh client: &{IP:172.25.123.14 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700-m02\id_rsa Username:docker}
	I0904 00:26:34.393279   11080 main.go:141] libmachine: [stdout =====>] : 172.25.123.14
	
	I0904 00:26:34.393279   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:34.394580   11080 sshutil.go:53] new ssh client: &{IP:172.25.123.14 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700-m02\id_rsa Username:docker}
	I0904 00:26:34.467281   11080 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.8993202s)
	W0904 00:26:34.467281   11080 start.go:868] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0904 00:26:34.486658   11080 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.9091333s)
	W0904 00:26:34.486720   11080 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0904 00:26:34.498801   11080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 00:26:34.535372   11080 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0904 00:26:34.535372   11080 start.go:495] detecting cgroup driver to use...
	I0904 00:26:34.535804   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 00:26:34.591900   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0904 00:26:34.626628   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W0904 00:26:34.639171   11080 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0904 00:26:34.639286   11080 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0904 00:26:34.652692   11080 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0904 00:26:34.664369   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0904 00:26:34.697886   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0904 00:26:34.730993   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0904 00:26:34.763798   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0904 00:26:34.796156   11080 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 00:26:34.829332   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0904 00:26:34.863802   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0904 00:26:34.896669   11080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
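The run of `sed -i -r` commands above patches `/etc/containerd/config.toml` in place; the capture group `( *)` preserves whatever indentation the key already has. The SystemdCgroup edit as the equivalent regex substitution on a sample fragment (the fragment is illustrative; the real target is the file on the VM):

```python
# Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
import re

toml = "    SystemdCgroup = true\n"  # hypothetical config.toml fragment
out = re.sub(r"^( *)SystemdCgroup = .*$", r"\1SystemdCgroup = false",
             toml, flags=re.M)
print(out, end="")  # indentation preserved, value flipped to false
```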
	I0904 00:26:34.929767   11080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 00:26:34.950960   11080 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0904 00:26:34.964050   11080 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0904 00:26:34.998024   11080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 00:26:35.047952   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 00:26:35.278372   11080 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0904 00:26:35.342492   11080 start.go:495] detecting cgroup driver to use...
	I0904 00:26:35.353493   11080 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0904 00:26:35.397487   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 00:26:35.433432   11080 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 00:26:35.477531   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 00:26:35.516523   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0904 00:26:35.553476   11080 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0904 00:26:35.623758   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0904 00:26:35.650744   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 00:26:35.700526   11080 ssh_runner.go:195] Run: which cri-dockerd
	I0904 00:26:35.718007   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0904 00:26:35.738743   11080 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0904 00:26:35.789800   11080 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0904 00:26:36.034979   11080 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0904 00:26:36.268919   11080 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0904 00:26:36.269018   11080 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0904 00:26:36.320780   11080 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0904 00:26:36.358186   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 00:26:36.597109   11080 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0904 00:26:37.446377   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0904 00:26:37.488739   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0904 00:26:37.530024   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0904 00:26:37.572932   11080 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0904 00:26:37.821134   11080 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0904 00:26:38.071084   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 00:26:38.311393   11080 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0904 00:26:38.376687   11080 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0904 00:26:38.413818   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 00:26:38.652518   11080 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0904 00:26:38.812387   11080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0904 00:26:38.847680   11080 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0904 00:26:38.860548   11080 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0904 00:26:38.871623   11080 start.go:563] Will wait 60s for crictl version
	I0904 00:26:38.884332   11080 ssh_runner.go:195] Run: which crictl
	I0904 00:26:38.901990   11080 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 00:26:38.959738   11080 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.3.2
	RuntimeApiVersion:  v1
	I0904 00:26:38.970716   11080 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0904 00:26:39.013976   11080 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0904 00:26:39.055296   11080 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.3.2 ...
	I0904 00:26:39.057885   11080 out.go:179]   - env NO_PROXY=172.25.112.78
	I0904 00:26:39.059874   11080 ip.go:180] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0904 00:26:39.064458   11080 ip.go:194] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0904 00:26:39.064458   11080 ip.go:194] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0904 00:26:39.064458   11080 ip.go:189] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0904 00:26:39.064458   11080 ip.go:215] Found interface: {Index:12 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:71:2e:33 Flags:up|broadcast|multicast|running}
	I0904 00:26:39.067479   11080 ip.go:218] interface addr: fe80::b536:5e95:cebf:bd87/64
	I0904 00:26:39.067479   11080 ip.go:218] interface addr: 172.25.112.1/20
	I0904 00:26:39.081939   11080 ssh_runner.go:195] Run: grep 172.25.112.1	host.minikube.internal$ /etc/hosts
	I0904 00:26:39.088432   11080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.112.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
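	The `grep -v` / `echo` / `cp` one-liner above is minikube's idempotent hosts-entry update. A standalone sketch of the same shape, replayed against a scratch file so it is safe to run (the scratch paths and the stale `172.25.112.9` entry are hypothetical; the real command targets `/etc/hosts` via sudo):

```shell
#!/usr/bin/env bash
# Sketch of the idempotent hosts-file update seen in the log above,
# run against a temp file instead of /etc/hosts (paths hypothetical).
set -eu

HOSTS=$(mktemp)                                  # stand-in for /etc/hosts
printf '127.0.0.1\tlocalhost\n172.25.112.9\thost.minikube.internal\n' > "$HOSTS"

IP=172.25.112.1
NAME=host.minikube.internal

# Same shape as the logged command: drop any stale line for $NAME,
# append the fresh entry, then copy the rewritten file back into place.
TMP=$(mktemp)
{ grep -v "${NAME}\$" "$HOSTS" || true; printf '%s\t%s\n' "$IP" "$NAME"; } > "$TMP"
cp "$TMP" "$HOSTS"

grep "$NAME" "$HOSTS"   # exactly one, up-to-date entry remains
```

	Writing to a temp file and copying it over the original (rather than redirecting straight into `/etc/hosts`) avoids truncating the file that `grep` is still reading from.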
	I0904 00:26:39.114372   11080 mustload.go:65] Loading cluster: multinode-477700
	I0904 00:26:39.115044   11080 config.go:182] Loaded profile config "multinode-477700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0904 00:26:39.115707   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:26:41.182183   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:26:41.182183   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:41.182183   11080 host.go:66] Checking if "multinode-477700" exists ...
	I0904 00:26:41.183973   11080 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-477700 for IP: 172.25.123.14
	I0904 00:26:41.183973   11080 certs.go:194] generating shared ca certs ...
	I0904 00:26:41.184091   11080 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 00:26:41.184731   11080 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0904 00:26:41.185078   11080 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0904 00:26:41.185078   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0904 00:26:41.186512   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0904 00:26:41.186512   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0904 00:26:41.186512   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0904 00:26:41.187198   11080 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220.pem (1338 bytes)
	W0904 00:26:41.187970   11080 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220_empty.pem, impossibly tiny 0 bytes
	I0904 00:26:41.188062   11080 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0904 00:26:41.188419   11080 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0904 00:26:41.188644   11080 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0904 00:26:41.188866   11080 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0904 00:26:41.189434   11080 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem (1708 bytes)
	I0904 00:26:41.189613   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220.pem -> /usr/share/ca-certificates/2220.pem
	I0904 00:26:41.189613   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem -> /usr/share/ca-certificates/22202.pem
	I0904 00:26:41.189613   11080 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0904 00:26:41.190432   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 00:26:41.247241   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0904 00:26:41.300774   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 00:26:41.352486   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0904 00:26:41.407183   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\2220.pem --> /usr/share/ca-certificates/2220.pem (1338 bytes)
	I0904 00:26:41.459914   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\22202.pem --> /usr/share/ca-certificates/22202.pem (1708 bytes)
	I0904 00:26:41.511069   11080 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 00:26:41.581327   11080 ssh_runner.go:195] Run: openssl version
	I0904 00:26:41.602540   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 00:26:41.635439   11080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 00:26:41.642285   11080 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  3 22:20 /usr/share/ca-certificates/minikubeCA.pem
	I0904 00:26:41.654910   11080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 00:26:41.678096   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0904 00:26:41.712973   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2220.pem && ln -fs /usr/share/ca-certificates/2220.pem /etc/ssl/certs/2220.pem"
	I0904 00:26:41.748197   11080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2220.pem
	I0904 00:26:41.755561   11080 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  3 22:37 /usr/share/ca-certificates/2220.pem
	I0904 00:26:41.768461   11080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2220.pem
	I0904 00:26:41.789473   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2220.pem /etc/ssl/certs/51391683.0"
	I0904 00:26:41.822692   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22202.pem && ln -fs /usr/share/ca-certificates/22202.pem /etc/ssl/certs/22202.pem"
	I0904 00:26:41.857824   11080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22202.pem
	I0904 00:26:41.865244   11080 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  3 22:37 /usr/share/ca-certificates/22202.pem
	I0904 00:26:41.878723   11080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22202.pem
	I0904 00:26:41.903282   11080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22202.pem /etc/ssl/certs/3ec20f2e.0"
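	The `openssl x509 -hash` / `ln -fs` pairs above install each CA into the OpenSSL trust directory, which resolves anchors by `<subject-hash>.0` filename. A sketch of that step using a throwaway self-signed certificate in a temp directory (the cert and paths are hypothetical stand-ins for minikube's real CA files):

```shell
#!/usr/bin/env bash
# Sketch of the subject-hash symlink step the log runs for each CA cert,
# using a disposable self-signed cert instead of minikube's CA (hypothetical).
set -eu

CERTS=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=sketchCA" \
  -keyout "$CERTS/ca.key" -out "$CERTS/sketchCA.pem" 2>/dev/null

# OpenSSL looks up trust anchors in a CApath by <subject-hash>.0 links;
# this is what lines like "ln -fs ... /etc/ssl/certs/b5213941.0" build.
HASH=$(openssl x509 -hash -noout -in "$CERTS/sketchCA.pem")
ln -fs "$CERTS/sketchCA.pem" "$CERTS/$HASH.0"

# With the hash link in place, verification against the CApath succeeds.
openssl verify -CApath "$CERTS" "$CERTS/sketchCA.pem"
```

	The hash in the link name comes from the certificate's subject, so re-running the step after a cert rotation simply repoints or replaces the link.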
	I0904 00:26:41.936848   11080 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 00:26:41.944842   11080 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0904 00:26:41.944842   11080 kubeadm.go:926] updating node {m02 172.25.123.14 8443 v1.34.0 docker false true} ...
	I0904 00:26:41.944842   11080 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-477700-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.123.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:multinode-477700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0904 00:26:41.959547   11080 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0904 00:26:41.980487   11080 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 00:26:41.993133   11080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0904 00:26:42.014364   11080 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0904 00:26:42.059353   11080 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 00:26:42.123704   11080 ssh_runner.go:195] Run: grep 172.25.112.78	control-plane.minikube.internal$ /etc/hosts
	I0904 00:26:42.130834   11080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.112.78	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 00:26:42.175966   11080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 00:26:42.423039   11080 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 00:26:42.482889   11080 host.go:66] Checking if "multinode-477700" exists ...
	I0904 00:26:42.484394   11080 start.go:317] joinCluster: &{Name:multinode-477700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:multinode-477700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.112.78 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.123.14 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.25.125.123 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 00:26:42.484586   11080 start.go:330] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.25.123.14 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0904 00:26:42.484645   11080 host.go:66] Checking if "multinode-477700-m02" exists ...
	I0904 00:26:42.485266   11080 mustload.go:65] Loading cluster: multinode-477700
	I0904 00:26:42.485585   11080 config.go:182] Loaded profile config "multinode-477700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0904 00:26:42.486378   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:26:44.650825   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:26:44.651338   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:44.651338   11080 host.go:66] Checking if "multinode-477700" exists ...
	I0904 00:26:44.652160   11080 api_server.go:166] Checking apiserver status ...
	I0904 00:26:44.663203   11080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 00:26:44.663203   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:26:46.826624   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:26:46.827600   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:46.827983   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:26:49.320019   11080 main.go:141] libmachine: [stdout =====>] : 172.25.112.78
	
	I0904 00:26:49.320019   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:49.321510   11080 sshutil.go:53] new ssh client: &{IP:172.25.112.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700\id_rsa Username:docker}
	I0904 00:26:49.450572   11080 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.7873042s)
	I0904 00:26:49.464391   11080 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2455/cgroup
	W0904 00:26:49.484842   11080 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2455/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0904 00:26:49.496635   11080 ssh_runner.go:195] Run: ls
	I0904 00:26:49.507262   11080 api_server.go:253] Checking apiserver healthz at https://172.25.112.78:8443/healthz ...
	I0904 00:26:49.518018   11080 api_server.go:279] https://172.25.112.78:8443/healthz returned 200:
	ok
	I0904 00:26:49.529876   11080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl drain multinode-477700-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0904 00:26:52.735999   11080 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl drain multinode-477700-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.2060797s)
	I0904 00:26:52.735999   11080 node.go:128] successfully drained node "multinode-477700-m02"
	I0904 00:26:52.735999   11080 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0904 00:26:52.735999   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:26:54.812706   11080 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:26:54.813661   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:54.813735   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:26:57.302324   11080 main.go:141] libmachine: [stdout =====>] : 172.25.123.14
	
	I0904 00:26:57.303046   11080 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:26:57.303486   11080 sshutil.go:53] new ssh client: &{IP:172.25.123.14 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700-m02\id_rsa Username:docker}
	I0904 00:26:58.139628   11080 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (5.4035561s)
	I0904 00:26:58.139628   11080 node.go:155] successfully reset node "multinode-477700-m02"
	I0904 00:26:58.141409   11080 kapi.go:59] client config for multinode-477700: &rest.Config{Host:"https://172.25.112.78:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-477700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-477700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24e0580), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0904 00:26:58.143070   11080 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0904 00:26:58.163813   11080 node.go:180] successfully deleted node "multinode-477700-m02"
	I0904 00:26:58.163813   11080 start.go:334] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.25.123.14 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0904 00:26:58.164803   11080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0904 00:26:58.164803   11080 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	
	
	==> Docker <==
	Sep 04 00:23:48 multinode-477700 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Sep 04 00:23:48 multinode-477700 cri-dockerd[1697]: time="2025-09-04T00:23:48Z" level=info msg="Starting cri-dockerd 0.4.0 (b9b8893)"
	Sep 04 00:23:48 multinode-477700 cri-dockerd[1697]: time="2025-09-04T00:23:48Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Sep 04 00:23:48 multinode-477700 cri-dockerd[1697]: time="2025-09-04T00:23:48Z" level=info msg="Start docker client with request timeout 0s"
	Sep 04 00:23:48 multinode-477700 cri-dockerd[1697]: time="2025-09-04T00:23:48Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Sep 04 00:23:48 multinode-477700 cri-dockerd[1697]: time="2025-09-04T00:23:48Z" level=info msg="Loaded network plugin cni"
	Sep 04 00:23:48 multinode-477700 cri-dockerd[1697]: time="2025-09-04T00:23:48Z" level=info msg="Docker cri networking managed by network plugin cni"
	Sep 04 00:23:48 multinode-477700 cri-dockerd[1697]: time="2025-09-04T00:23:48Z" level=info msg="Setting cgroupDriver cgroupfs"
	Sep 04 00:23:48 multinode-477700 cri-dockerd[1697]: time="2025-09-04T00:23:48Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Sep 04 00:23:48 multinode-477700 cri-dockerd[1697]: time="2025-09-04T00:23:48Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Sep 04 00:23:48 multinode-477700 cri-dockerd[1697]: time="2025-09-04T00:23:48Z" level=info msg="Start cri-dockerd grpc backend"
	Sep 04 00:23:48 multinode-477700 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Sep 04 00:23:54 multinode-477700 cri-dockerd[1697]: time="2025-09-04T00:23:54Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-7b57f96db7-bj95n_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"076e3b0b4e95f7f9aa733bf01a48e77770208afcd20307559262a179e3dcd165\""
	Sep 04 00:23:54 multinode-477700 cri-dockerd[1697]: time="2025-09-04T00:23:54Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-mg9nc_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"882d6e338723d7cd04e223a8df9093f1e5b39a41416a7bdb7104487e3061a0e8\""
	Sep 04 00:23:55 multinode-477700 cri-dockerd[1697]: time="2025-09-04T00:23:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a2c5f7f77a6e1aed584225a156db98b89dc8aa8f3b0ecddeeeec80dc2b0f1c96/resolv.conf as [nameserver 172.25.112.1]"
	Sep 04 00:23:56 multinode-477700 cri-dockerd[1697]: time="2025-09-04T00:23:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/523173d7c2c5625ce091fcab71353c919b547de9a72b9961ce83c3cfb564fcef/resolv.conf as [nameserver 172.25.112.1]"
	Sep 04 00:23:56 multinode-477700 cri-dockerd[1697]: time="2025-09-04T00:23:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/71bfe0b080a490ea3c059e519af35ecedf3b6dcfc3321179c9859643d657290e/resolv.conf as [nameserver 172.25.112.1]"
	Sep 04 00:23:56 multinode-477700 cri-dockerd[1697]: time="2025-09-04T00:23:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1d95e67298484f6371c1da07e47374e50dece7f43f5c8611f6da7d8535e027a3/resolv.conf as [nameserver 172.25.112.1]"
	Sep 04 00:24:01 multinode-477700 cri-dockerd[1697]: time="2025-09-04T00:24:01Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 04 00:24:03 multinode-477700 cri-dockerd[1697]: time="2025-09-04T00:24:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f69e1165510ad55bf6e5729ddc8349a2027ad4e0d41ab26d9b8c37ad258c698e/resolv.conf as [nameserver 172.25.112.1]"
	Sep 04 00:24:03 multinode-477700 cri-dockerd[1697]: time="2025-09-04T00:24:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7802ec6bf5b88ac9bfbec7fa8e44172a9c4568a155c389e56c4b1cd386077204/resolv.conf as [nameserver 172.25.112.1]"
	Sep 04 00:24:04 multinode-477700 cri-dockerd[1697]: time="2025-09-04T00:24:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/146a3b268f82b7995c77ccdced86b69173afcc9ac77f794259d53035f84fd6c3/resolv.conf as [nameserver 172.25.112.1]"
	Sep 04 00:24:35 multinode-477700 dockerd[1328]: time="2025-09-04T00:24:35.353553833Z" level=info msg="ignoring event" container=aceaa96fccb3c66803bfd8dc22890521eaa3dc9b98fbfb61c6c8a5dc6cdc2028 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 04 00:25:07 multinode-477700 cri-dockerd[1697]: time="2025-09-04T00:25:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e058d6e1bb03c20de2640df0f19d059b241d2185dd016d04355950fd02e6c5e2/resolv.conf as [nameserver 172.25.112.1]"
	Sep 04 00:25:07 multinode-477700 cri-dockerd[1697]: time="2025-09-04T00:25:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b10c9804048982fbd89cb30c7b8ede6350c32f0f8e28eaebc354feb4efad1540/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bd96ca3ab8ab3       8c811b4aec35f                                                                                         2 minutes ago       Running             busybox                   1                   b10c980404898       busybox-7b57f96db7-bj95n
	9f891a1bb8c0a       52546a367cc9e                                                                                         2 minutes ago       Running             coredns                   1                   e058d6e1bb03c       coredns-66bc5c9577-mg9nc
	b29aa5acf82f0       6e38f40d628db                                                                                         2 minutes ago       Running             storage-provisioner       2                   f69e1165510ad       storage-provisioner
	839b11ace7294       409467f978b4a                                                                                         3 minutes ago       Running             kindnet-cni               1                   146a3b268f82b       kindnet-gdpss
	f6b893b647d6d       df0860106674d                                                                                         3 minutes ago       Running             kube-proxy                1                   7802ec6bf5b88       kube-proxy-v9bfx
	aceaa96fccb3c       6e38f40d628db                                                                                         3 minutes ago       Exited              storage-provisioner       1                   f69e1165510ad       storage-provisioner
	6481be0a1a3c1       46169d968e920                                                                                         3 minutes ago       Running             kube-scheduler            1                   1d95e67298484       kube-scheduler-multinode-477700
	71e03d0d5e9c1       90550c43ad2bc                                                                                         3 minutes ago       Running             kube-apiserver            0                   71bfe0b080a49       kube-apiserver-multinode-477700
	d4982b2d6f022       5f1f5298c888d                                                                                         3 minutes ago       Running             etcd                      0                   523173d7c2c56       etcd-multinode-477700
	054771bfec63e       a0af72f2ec6d6                                                                                         3 minutes ago       Running             kube-controller-manager   1                   a2c5f7f77a6e1       kube-controller-manager-multinode-477700
	316321453cf2b       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago      Exited              busybox                   0                   076e3b0b4e95f       busybox-7b57f96db7-bj95n
	89b7640b7697a       52546a367cc9e                                                                                         26 minutes ago      Exited              coredns                   0                   882d6e338723d       coredns-66bc5c9577-mg9nc
	3dd1de2460602       kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a              27 minutes ago      Exited              kindnet-cni               0                   4c1d437a10c4c       kindnet-gdpss
	a5c4aad9ef6fa       df0860106674d                                                                                         27 minutes ago      Exited              kube-proxy                0                   71185e7e5e3a7       kube-proxy-v9bfx
	944ecb4902689       a0af72f2ec6d6                                                                                         27 minutes ago      Exited              kube-controller-manager   0                   9b5837c04c52b       kube-controller-manager-multinode-477700
	2b011dd581a49       46169d968e920                                                                                         27 minutes ago      Exited              kube-scheduler            0                   e2706c7084c7d       kube-scheduler-multinode-477700
	
	
	==> coredns [89b7640b7697] <==
	[INFO] 10.244.1.2:32790 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000074001s
	[INFO] 10.244.1.2:53178 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000378305s
	[INFO] 10.244.1.2:44826 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104201s
	[INFO] 10.244.1.2:59967 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000168702s
	[INFO] 10.244.1.2:36824 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000274004s
	[INFO] 10.244.1.2:56069 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092801s
	[INFO] 10.244.1.2:42000 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095701s
	[INFO] 10.244.0.3:60492 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000234304s
	[INFO] 10.244.0.3:49587 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000128502s
	[INFO] 10.244.0.3:41537 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000614908s
	[INFO] 10.244.0.3:41562 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059901s
	[INFO] 10.244.1.2:33339 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141102s
	[INFO] 10.244.1.2:37904 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000153002s
	[INFO] 10.244.1.2:43813 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107001s
	[INFO] 10.244.1.2:36152 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000169902s
	[INFO] 10.244.0.3:59535 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139802s
	[INFO] 10.244.0.3:56781 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000176202s
	[INFO] 10.244.0.3:40076 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000151002s
	[INFO] 10.244.0.3:43241 - 5 "PTR IN 1.112.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000274004s
	[INFO] 10.244.1.2:46944 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000268504s
	[INFO] 10.244.1.2:35091 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000126902s
	[INFO] 10.244.1.2:40051 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135702s
	[INFO] 10.244.1.2:46583 - 5 "PTR IN 1.112.25.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000069501s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9f891a1bb8c0] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6a91ebc4603c280fffd96028976a93bc50f334c3ff12031fdaf482f119377dc83ef299e1deb76633d43d96f71e1d16982cd22dedeb608e78281d30e2ecaef945
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53462 - 36678 "HINFO IN 3591618350796670836.5643319704878847982. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.059594321s
	
	
	==> describe nodes <==
	Name:               multinode-477700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-477700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b3583632deefb20d71cab8d8ac0a8c3504aed1fb
	                    minikube.k8s.io/name=multinode-477700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_04T00_00_06_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Sep 2025 00:00:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-477700
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Sep 2025 00:27:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 04 Sep 2025 00:24:48 +0000   Wed, 03 Sep 2025 23:59:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 04 Sep 2025 00:24:48 +0000   Wed, 03 Sep 2025 23:59:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 04 Sep 2025 00:24:48 +0000   Wed, 03 Sep 2025 23:59:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 04 Sep 2025 00:24:48 +0000   Thu, 04 Sep 2025 00:24:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.112.78
	  Hostname:    multinode-477700
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	System Info:
	  Machine ID:                 8913745277ce452fa8cbf6a1a15e6b6e
	  System UUID:                ce975b69-0775-4046-ad71-2f0d48df367a
	  Boot ID:                    15d0b7e5-006a-490b-bd18-1117dab9032a
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.3.2
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-bj95n                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-66bc5c9577-mg9nc                    100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     27m
	  kube-system                 etcd-multinode-477700                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         3m21s
	  kube-system                 kindnet-gdpss                               100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      27m
	  kube-system                 kube-apiserver-multinode-477700             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m21s
	  kube-system                 kube-controller-manager-multinode-477700    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-v9bfx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-multinode-477700             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (7%)  220Mi (7%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 3m17s                  kube-proxy       
	  Normal   Starting                 27m                    kube-proxy       
	  Normal   Starting                 27m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  27m                    kubelet          Node multinode-477700 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    27m                    kubelet          Node multinode-477700 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     27m                    kubelet          Node multinode-477700 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           27m                    node-controller  Node multinode-477700 event: Registered Node multinode-477700 in Controller
	  Normal   NodeReady                26m                    kubelet          Node multinode-477700 status is now: NodeReady
	  Normal   Starting                 3m29s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  3m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  3m28s (x8 over 3m29s)  kubelet          Node multinode-477700 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m28s (x8 over 3m29s)  kubelet          Node multinode-477700 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m28s (x7 over 3m29s)  kubelet          Node multinode-477700 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 3m22s                  kubelet          Node multinode-477700 has been rebooted, boot id: 15d0b7e5-006a-490b-bd18-1117dab9032a
	  Normal   RegisteredNode           3m19s                  node-controller  Node multinode-477700 event: Registered Node multinode-477700 in Controller
	
	
	Name:               multinode-477700-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-477700-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b3583632deefb20d71cab8d8ac0a8c3504aed1fb
	                    minikube.k8s.io/name=multinode-477700
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_04T00_19_22_0700
	                    minikube.k8s.io/version=v1.36.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 04 Sep 2025 00:19:22 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-477700-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 04 Sep 2025 00:20:23 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 04 Sep 2025 00:19:39 +0000   Thu, 04 Sep 2025 00:21:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 04 Sep 2025 00:19:39 +0000   Thu, 04 Sep 2025 00:21:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 04 Sep 2025 00:19:39 +0000   Thu, 04 Sep 2025 00:21:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 04 Sep 2025 00:19:39 +0000   Thu, 04 Sep 2025 00:21:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.25.125.123
	  Hostname:    multinode-477700-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2976488Ki
	  pods:               110
	System Info:
	  Machine ID:                 0ab3bbdec9e54ca58b750b49b272a5bf
	  System UUID:                458c760a-8c79-3642-892c-2349e4b3ba6b
	  Boot ID:                    cb113028-3d51-4c30-90af-697da703a099
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.3.2
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-gj9bp       100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      19m
	  kube-system                 kube-proxy-rbxm9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (1%)  50Mi (1%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 19m                  kube-proxy       
	  Normal  Starting                 7m58s                kube-proxy       
	  Normal  NodeAllocatableEnforced  19m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m (x3 over 19m)    kubelet          Node multinode-477700-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x3 over 19m)    kubelet          Node multinode-477700-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x3 over 19m)    kubelet          Node multinode-477700-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                18m                  kubelet          Node multinode-477700-m03 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  8m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m1s (x3 over 8m2s)  kubelet          Node multinode-477700-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m1s (x3 over 8m2s)  kubelet          Node multinode-477700-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m1s (x3 over 8m2s)  kubelet          Node multinode-477700-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m58s                node-controller  Node multinode-477700-m03 event: Registered Node multinode-477700-m03 in Controller
	  Normal  NodeReady                7m44s                kubelet          Node multinode-477700-m03 status is now: NodeReady
	  Normal  NodeNotReady             6m7s                 node-controller  Node multinode-477700-m03 status is now: NodeNotReady
	  Normal  RegisteredNode           3m19s                node-controller  Node multinode-477700-m03 event: Registered Node multinode-477700-m03 in Controller
	
	
	==> dmesg <==
	[Sep 4 00:22] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000001] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +0.002276] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.000014] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001383] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	              * this clock source is slow. Consider trying other clock sources
	[  +0.888433] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +0.000056] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.008910] (rpcbind)[114]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.327731] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep 4 00:23] kauditd_printk_skb: 144 callbacks suppressed
	[  +0.146903] kauditd_printk_skb: 259 callbacks suppressed
	[Sep 4 00:24] kauditd_printk_skb: 159 callbacks suppressed
	[ +29.683888] kauditd_printk_skb: 164 callbacks suppressed
	[ +12.720822] kauditd_printk_skb: 13 callbacks suppressed
	[Sep 4 00:25] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [d4982b2d6f02] <==
	{"level":"warn","ts":"2025-09-04T00:23:59.832541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T00:23:59.843288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T00:23:59.868317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T00:23:59.885237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T00:23:59.888144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T00:23:59.905176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T00:23:59.919279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T00:23:59.934164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T00:23:59.947277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T00:23:59.961946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T00:23:59.980739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T00:24:00.017521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T00:24:00.047545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T00:24:00.059086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T00:24:00.081073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T00:24:00.102492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T00:24:00.114269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T00:24:00.126723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T00:24:00.140388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T00:24:00.152544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T00:24:00.176248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T00:24:00.183794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T00:24:00.203057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T00:24:00.212303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T00:24:00.317215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56022","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:27:23 up 5 min,  0 users,  load average: 0.87, 0.72, 0.32
	Linux multinode-477700 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Jul 26 03:21:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kindnet [3dd1de246060] <==
	I0904 00:20:51.757875       1 main.go:324] Node multinode-477700-m03 has CIDR [10.244.3.0/24] 
	I0904 00:21:01.748825       1 main.go:297] Handling node with IPs: map[172.25.126.63:{}]
	I0904 00:21:01.748859       1 main.go:301] handling current node
	I0904 00:21:01.748875       1 main.go:297] Handling node with IPs: map[172.25.125.181:{}]
	I0904 00:21:01.748882       1 main.go:324] Node multinode-477700-m02 has CIDR [10.244.1.0/24] 
	I0904 00:21:01.749282       1 main.go:297] Handling node with IPs: map[172.25.125.123:{}]
	I0904 00:21:01.749365       1 main.go:324] Node multinode-477700-m03 has CIDR [10.244.3.0/24] 
	I0904 00:21:11.748540       1 main.go:297] Handling node with IPs: map[172.25.126.63:{}]
	I0904 00:21:11.748576       1 main.go:301] handling current node
	I0904 00:21:11.748593       1 main.go:297] Handling node with IPs: map[172.25.125.181:{}]
	I0904 00:21:11.748599       1 main.go:324] Node multinode-477700-m02 has CIDR [10.244.1.0/24] 
	I0904 00:21:11.751126       1 main.go:297] Handling node with IPs: map[172.25.125.123:{}]
	I0904 00:21:11.751147       1 main.go:324] Node multinode-477700-m03 has CIDR [10.244.3.0/24] 
	I0904 00:21:21.751548       1 main.go:297] Handling node with IPs: map[172.25.126.63:{}]
	I0904 00:21:21.751701       1 main.go:301] handling current node
	I0904 00:21:21.751903       1 main.go:297] Handling node with IPs: map[172.25.125.181:{}]
	I0904 00:21:21.752164       1 main.go:324] Node multinode-477700-m02 has CIDR [10.244.1.0/24] 
	I0904 00:21:21.752678       1 main.go:297] Handling node with IPs: map[172.25.125.123:{}]
	I0904 00:21:21.752815       1 main.go:324] Node multinode-477700-m03 has CIDR [10.244.3.0/24] 
	I0904 00:21:31.756375       1 main.go:297] Handling node with IPs: map[172.25.126.63:{}]
	I0904 00:21:31.756409       1 main.go:301] handling current node
	I0904 00:21:31.756427       1 main.go:297] Handling node with IPs: map[172.25.125.181:{}]
	I0904 00:21:31.756433       1 main.go:324] Node multinode-477700-m02 has CIDR [10.244.1.0/24] 
	I0904 00:21:31.757320       1 main.go:297] Handling node with IPs: map[172.25.125.123:{}]
	I0904 00:21:31.757427       1 main.go:324] Node multinode-477700-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [839b11ace729] <==
	I0904 00:26:36.039518       1 main.go:301] handling current node
	I0904 00:26:36.039541       1 main.go:297] Handling node with IPs: map[172.25.125.181:{}]
	I0904 00:26:36.039550       1 main.go:324] Node multinode-477700-m02 has CIDR [10.244.1.0/24] 
	I0904 00:26:36.040178       1 main.go:297] Handling node with IPs: map[172.25.125.123:{}]
	I0904 00:26:36.040309       1 main.go:324] Node multinode-477700-m03 has CIDR [10.244.3.0/24] 
	I0904 00:26:46.039968       1 main.go:297] Handling node with IPs: map[172.25.125.123:{}]
	I0904 00:26:46.040058       1 main.go:324] Node multinode-477700-m03 has CIDR [10.244.3.0/24] 
	I0904 00:26:46.040385       1 main.go:297] Handling node with IPs: map[172.25.112.78:{}]
	I0904 00:26:46.040471       1 main.go:301] handling current node
	I0904 00:26:46.040490       1 main.go:297] Handling node with IPs: map[172.25.125.181:{}]
	I0904 00:26:46.040497       1 main.go:324] Node multinode-477700-m02 has CIDR [10.244.1.0/24] 
	I0904 00:26:56.039764       1 main.go:297] Handling node with IPs: map[172.25.112.78:{}]
	I0904 00:26:56.039871       1 main.go:301] handling current node
	I0904 00:26:56.039890       1 main.go:297] Handling node with IPs: map[172.25.125.181:{}]
	I0904 00:26:56.039898       1 main.go:324] Node multinode-477700-m02 has CIDR [10.244.1.0/24] 
	I0904 00:26:56.040120       1 main.go:297] Handling node with IPs: map[172.25.125.123:{}]
	I0904 00:26:56.040189       1 main.go:324] Node multinode-477700-m03 has CIDR [10.244.3.0/24] 
	I0904 00:27:06.053134       1 main.go:297] Handling node with IPs: map[172.25.112.78:{}]
	I0904 00:27:06.053248       1 main.go:301] handling current node
	I0904 00:27:06.053282       1 main.go:297] Handling node with IPs: map[172.25.125.123:{}]
	I0904 00:27:06.053299       1 main.go:324] Node multinode-477700-m03 has CIDR [10.244.3.0/24] 
	I0904 00:27:16.040159       1 main.go:297] Handling node with IPs: map[172.25.112.78:{}]
	I0904 00:27:16.040543       1 main.go:301] handling current node
	I0904 00:27:16.040565       1 main.go:297] Handling node with IPs: map[172.25.125.123:{}]
	I0904 00:27:16.040573       1 main.go:324] Node multinode-477700-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [71e03d0d5e9c] <==
	I0904 00:24:01.213561       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I0904 00:24:01.238729       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0904 00:24:01.241649       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I0904 00:24:01.243423       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0904 00:24:01.259644       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0904 00:24:01.260134       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0904 00:24:01.260314       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0904 00:24:01.261545       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I0904 00:24:01.273906       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0904 00:24:01.291191       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0904 00:24:01.298558       1 cache.go:39] Caches are synced for autoregister controller
	I0904 00:24:02.043847       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0904 00:24:02.507918       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.25.112.78]
	I0904 00:24:02.510109       1 controller.go:667] quota admission added evaluator for: endpoints
	I0904 00:24:02.520388       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0904 00:24:02.906541       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0904 00:24:04.759620       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0904 00:24:04.786649       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0904 00:24:05.052288       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0904 00:24:05.444280       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0904 00:24:05.460951       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0904 00:25:08.424145       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 00:25:16.421436       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 00:26:09.030117       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0904 00:26:42.751637       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [054771bfec63] <==
	I0904 00:24:04.661840       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0904 00:24:04.664185       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0904 00:24:04.665337       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0904 00:24:04.673852       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0904 00:24:04.676407       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0904 00:24:04.677212       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0904 00:24:04.677611       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0904 00:24:04.679371       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0904 00:24:04.659968       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0904 00:24:04.677268       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0904 00:24:04.697347       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0904 00:24:04.708314       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0904 00:24:04.709119       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0904 00:24:04.709205       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-477700-m02"
	I0904 00:24:04.725414       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0904 00:24:04.733975       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-477700-m03"
	I0904 00:24:04.735608       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-477700"
	I0904 00:24:04.735660       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-477700-m02"
	I0904 00:24:04.752111       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0904 00:24:04.826755       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0904 00:24:04.855919       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0904 00:24:04.856416       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0904 00:24:04.857179       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0904 00:24:48.648807       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-477700-m02"
	E0904 00:27:04.660604       1 gc_controller.go:151] "Failed to get node" err="node \"multinode-477700-m02\" not found" logger="pod-garbage-collector-controller" node="multinode-477700-m02"
	
	
	==> kube-controller-manager [944ecb490268] <==
	I0904 00:00:10.433454       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0904 00:00:10.439642       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0904 00:00:10.441791       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0904 00:00:10.447313       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0904 00:00:10.449078       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0904 00:00:10.473317       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0904 00:00:10.473409       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0904 00:00:10.473418       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0904 00:00:35.432314       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0904 00:03:20.421389       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-477700-m02\" does not exist"
	I0904 00:03:20.468745       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-477700-m02"
	I0904 00:03:20.470117       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-477700-m02" podCIDRs=["10.244.1.0/24"]
	I0904 00:03:53.619658       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-477700-m02"
	I0904 00:08:11.744880       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-477700-m02"
	I0904 00:08:11.747327       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-477700-m03\" does not exist"
	I0904 00:08:11.805811       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-477700-m03" podCIDRs=["10.244.2.0/24"]
	I0904 00:08:15.558536       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-477700-m03"
	I0904 00:08:44.763758       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-477700-m02"
	I0904 00:16:55.710368       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-477700-m02"
	I0904 00:19:15.695123       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-477700-m02"
	I0904 00:19:22.040250       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-477700-m03\" does not exist"
	I0904 00:19:22.040517       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-477700-m02"
	I0904 00:19:22.064777       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-477700-m03" podCIDRs=["10.244.3.0/24"]
	I0904 00:19:39.315522       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-477700-m02"
	I0904 00:21:16.155517       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-477700-m02"
	
	
	==> kube-proxy [a5c4aad9ef6f] <==
	I0904 00:00:12.868323       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0904 00:00:12.971779       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0904 00:00:12.972017       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["172.25.126.63"]
	E0904 00:00:12.972390       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 00:00:13.076726       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0904 00:00:13.076871       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0904 00:00:13.076904       1 server_linux.go:132] "Using iptables Proxier"
	I0904 00:00:13.095558       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 00:00:13.096639       1 server.go:527] "Version info" version="v1.34.0"
	I0904 00:00:13.096932       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 00:00:13.104702       1 config.go:309] "Starting node config controller"
	I0904 00:00:13.104967       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0904 00:00:13.105246       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0904 00:00:13.106085       1 config.go:403] "Starting serviceCIDR config controller"
	I0904 00:00:13.106209       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0904 00:00:13.106393       1 config.go:200] "Starting service config controller"
	I0904 00:00:13.106402       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0904 00:00:13.106417       1 config.go:106] "Starting endpoint slice config controller"
	I0904 00:00:13.106489       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0904 00:00:13.207252       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0904 00:00:13.207298       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0904 00:00:13.207304       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [f6b893b647d6] <==
	I0904 00:24:05.789383       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0904 00:24:05.894787       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0904 00:24:05.895091       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["172.25.112.78"]
	E0904 00:24:05.895312       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 00:24:06.059313       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0904 00:24:06.060132       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0904 00:24:06.060270       1 server_linux.go:132] "Using iptables Proxier"
	I0904 00:24:06.077423       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 00:24:06.078592       1 server.go:527] "Version info" version="v1.34.0"
	I0904 00:24:06.079455       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 00:24:06.085385       1 config.go:200] "Starting service config controller"
	I0904 00:24:06.085480       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0904 00:24:06.085512       1 config.go:106] "Starting endpoint slice config controller"
	I0904 00:24:06.085516       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0904 00:24:06.085528       1 config.go:403] "Starting serviceCIDR config controller"
	I0904 00:24:06.085532       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0904 00:24:06.086878       1 config.go:309] "Starting node config controller"
	I0904 00:24:06.086893       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0904 00:24:06.086900       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0904 00:24:06.185931       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0904 00:24:06.185955       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0904 00:24:06.185952       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [2b011dd581a4] <==
	E0904 00:00:03.306674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0904 00:00:03.471678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0904 00:00:03.523287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0904 00:00:03.602845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0904 00:00:03.633875       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0904 00:00:03.679977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0904 00:00:03.696864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0904 00:00:03.752294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0904 00:00:03.762244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0904 00:00:03.778242       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0904 00:00:03.808584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0904 00:00:03.826459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0904 00:00:03.831716       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0904 00:00:03.939748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0904 00:00:03.965874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0904 00:00:03.985838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0904 00:00:04.013641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0904 00:00:04.046466       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I0904 00:00:06.221474       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 00:21:32.792724       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0904 00:21:32.797999       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0904 00:21:32.801559       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0904 00:21:32.801582       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 00:21:32.801607       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0904 00:21:32.801804       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [6481be0a1a3c] <==
	I0904 00:23:58.283568       1 serving.go:386] Generated self-signed cert in-memory
	W0904 00:24:01.190925       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0904 00:24:01.191364       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0904 00:24:01.191383       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0904 00:24:01.191390       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0904 00:24:01.305507       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0904 00:24:01.306093       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 00:24:01.309556       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 00:24:01.309929       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0904 00:24:01.312277       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0904 00:24:01.312694       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0904 00:24:01.411185       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 04 00:24:34 multinode-477700 kubelet[2080]: E0904 00:24:34.566411    2080 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/39d4fb7b-1473-4a4e-9fb1-ce058a1c4904-config-volume podName:39d4fb7b-1473-4a4e-9fb1-ce058a1c4904 nodeName:}" failed. No retries permitted until 2025-09-04 00:25:06.566394156 +0000 UTC m=+71.971783413 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/39d4fb7b-1473-4a4e-9fb1-ce058a1c4904-config-volume") pod "coredns-66bc5c9577-mg9nc" (UID: "39d4fb7b-1473-4a4e-9fb1-ce058a1c4904") : object "kube-system"/"coredns" not registered
	Sep 04 00:24:34 multinode-477700 kubelet[2080]: E0904 00:24:34.667773    2080 projected.go:291] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Sep 04 00:24:34 multinode-477700 kubelet[2080]: E0904 00:24:34.667812    2080 projected.go:196] Error preparing data for projected volume kube-api-access-mp9gr for pod default/busybox-7b57f96db7-bj95n: object "default"/"kube-root-ca.crt" not registered
	Sep 04 00:24:34 multinode-477700 kubelet[2080]: E0904 00:24:34.668113    2080 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ac851b87-114b-409b-b27f-575f9243a270-kube-api-access-mp9gr podName:ac851b87-114b-409b-b27f-575f9243a270 nodeName:}" failed. No retries permitted until 2025-09-04 00:25:06.668086485 +0000 UTC m=+72.073475842 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-mp9gr" (UniqueName: "kubernetes.io/projected/ac851b87-114b-409b-b27f-575f9243a270-kube-api-access-mp9gr") pod "busybox-7b57f96db7-bj95n" (UID: "ac851b87-114b-409b-b27f-575f9243a270") : object "default"/"kube-root-ca.crt" not registered
	Sep 04 00:24:34 multinode-477700 kubelet[2080]: E0904 00:24:34.896724    2080 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7b57f96db7-bj95n" podUID="ac851b87-114b-409b-b27f-575f9243a270"
	Sep 04 00:24:34 multinode-477700 kubelet[2080]: E0904 00:24:34.899076    2080 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-66bc5c9577-mg9nc" podUID="39d4fb7b-1473-4a4e-9fb1-ce058a1c4904"
	Sep 04 00:24:36 multinode-477700 kubelet[2080]: I0904 00:24:36.349969    2080 scope.go:117] "RemoveContainer" containerID="cd3b66b73cb4bcabb17ac5a3f86470db6fa94f0e877895bbff20154398e45272"
	Sep 04 00:24:36 multinode-477700 kubelet[2080]: I0904 00:24:36.350612    2080 scope.go:117] "RemoveContainer" containerID="aceaa96fccb3c66803bfd8dc22890521eaa3dc9b98fbfb61c6c8a5dc6cdc2028"
	Sep 04 00:24:36 multinode-477700 kubelet[2080]: E0904 00:24:36.350774    2080 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(6ff776d2-685f-4111-bbe0-2d7f616fed2a)\"" pod="kube-system/storage-provisioner" podUID="6ff776d2-685f-4111-bbe0-2d7f616fed2a"
	Sep 04 00:24:36 multinode-477700 kubelet[2080]: E0904 00:24:36.897284    2080 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-66bc5c9577-mg9nc" podUID="39d4fb7b-1473-4a4e-9fb1-ce058a1c4904"
	Sep 04 00:24:36 multinode-477700 kubelet[2080]: E0904 00:24:36.897770    2080 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7b57f96db7-bj95n" podUID="ac851b87-114b-409b-b27f-575f9243a270"
	Sep 04 00:24:38 multinode-477700 kubelet[2080]: E0904 00:24:38.896872    2080 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7b57f96db7-bj95n" podUID="ac851b87-114b-409b-b27f-575f9243a270"
	Sep 04 00:24:38 multinode-477700 kubelet[2080]: E0904 00:24:38.897258    2080 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-66bc5c9577-mg9nc" podUID="39d4fb7b-1473-4a4e-9fb1-ce058a1c4904"
	Sep 04 00:24:40 multinode-477700 kubelet[2080]: E0904 00:24:40.896474    2080 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7b57f96db7-bj95n" podUID="ac851b87-114b-409b-b27f-575f9243a270"
	Sep 04 00:24:40 multinode-477700 kubelet[2080]: E0904 00:24:40.897373    2080 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-66bc5c9577-mg9nc" podUID="39d4fb7b-1473-4a4e-9fb1-ce058a1c4904"
	Sep 04 00:24:42 multinode-477700 kubelet[2080]: E0904 00:24:42.896490    2080 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-66bc5c9577-mg9nc" podUID="39d4fb7b-1473-4a4e-9fb1-ce058a1c4904"
	Sep 04 00:24:42 multinode-477700 kubelet[2080]: E0904 00:24:42.897482    2080 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7b57f96db7-bj95n" podUID="ac851b87-114b-409b-b27f-575f9243a270"
	Sep 04 00:24:44 multinode-477700 kubelet[2080]: E0904 00:24:44.896801    2080 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-66bc5c9577-mg9nc" podUID="39d4fb7b-1473-4a4e-9fb1-ce058a1c4904"
	Sep 04 00:24:44 multinode-477700 kubelet[2080]: E0904 00:24:44.897217    2080 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7b57f96db7-bj95n" podUID="ac851b87-114b-409b-b27f-575f9243a270"
	Sep 04 00:24:46 multinode-477700 kubelet[2080]: E0904 00:24:46.896954    2080 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-66bc5c9577-mg9nc" podUID="39d4fb7b-1473-4a4e-9fb1-ce058a1c4904"
	Sep 04 00:24:46 multinode-477700 kubelet[2080]: E0904 00:24:46.897900    2080 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7b57f96db7-bj95n" podUID="ac851b87-114b-409b-b27f-575f9243a270"
	Sep 04 00:24:47 multinode-477700 kubelet[2080]: I0904 00:24:47.896374    2080 scope.go:117] "RemoveContainer" containerID="aceaa96fccb3c66803bfd8dc22890521eaa3dc9b98fbfb61c6c8a5dc6cdc2028"
	Sep 04 00:24:48 multinode-477700 kubelet[2080]: I0904 00:24:48.630577    2080 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Sep 04 00:24:54 multinode-477700 kubelet[2080]: I0904 00:24:54.863104    2080 scope.go:117] "RemoveContainer" containerID="0545be46c0c92dcf55988abc1a4a1c36c47fab841c6a2266f102d04998523759"
	Sep 04 00:24:54 multinode-477700 kubelet[2080]: I0904 00:24:54.950516    2080 scope.go:117] "RemoveContainer" containerID="774d3869c70e5430622602eb3b74aa9573b4369c06684bd0fdcd71bc2829165e"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-477700 -n multinode-477700
helpers_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-477700 -n multinode-477700: (11.8373079s)
helpers_test.go:269: (dbg) Run:  kubectl --context multinode-477700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-ljmx2
helpers_test.go:282: ======> post-mortem[TestMultiNode/serial/RestartKeepsNodes]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context multinode-477700 describe pod busybox-7b57f96db7-ljmx2
helpers_test.go:290: (dbg) kubectl --context multinode-477700 describe pod busybox-7b57f96db7-ljmx2:

-- stdout --
	Name:             busybox-7b57f96db7-ljmx2
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mtww8 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-mtww8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  47s   default-scheduler  0/3 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable. no new claims to deallocate, preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
-- /stdout --
helpers_test.go:293: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (442.37s)
TestNoKubernetes/serial/StartWithStopK8s (53.37s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-686800 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=hyperv
E0904 01:08:04.826881    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-686800 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=hyperv: exit status 1 (19.8810425s)
-- stdout --
	* [NoKubernetes-686800] minikube v1.36.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6282 Build 19045.6282
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=21341
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-686800
-- /stdout --
** stderr ** 
	I0904 01:07:50.701958    8784 out.go:360] Setting OutFile to fd 1880 ...
	I0904 01:07:50.773991    8784 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 01:07:50.773991    8784 out.go:374] Setting ErrFile to fd 1740...
	I0904 01:07:50.773991    8784 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 01:07:50.791971    8784 out.go:368] Setting JSON to false
	I0904 01:07:50.794974    8784 start.go:130] hostinfo: {"hostname":"minikube6","uptime":31175,"bootTime":1756916894,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6282 Build 19045.6282","kernelVersion":"10.0.19045.6282 Build 19045.6282","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0904 01:07:50.794974    8784 start.go:138] gopshost.Virtualization returned error: not implemented yet
	I0904 01:07:50.798971    8784 out.go:179] * [NoKubernetes-686800] minikube v1.36.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6282 Build 19045.6282
	I0904 01:07:50.801975    8784 notify.go:220] Checking for updates...
	I0904 01:07:50.804987    8784 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0904 01:07:50.806964    8784 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 01:07:50.809982    8784 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0904 01:07:50.812982    8784 out.go:179]   - MINIKUBE_LOCATION=21341
	I0904 01:07:50.816007    8784 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 01:07:50.818955    8784 config.go:182] Loaded profile config "NoKubernetes-686800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0904 01:07:50.820011    8784 start.go:1892] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0904 01:07:50.820011    8784 start.go:1797] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I0904 01:07:50.820011    8784 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 01:07:56.392623    8784 out.go:179] * Using the hyperv driver based on existing profile
	I0904 01:07:56.395247    8784 start.go:304] selected driver: hyperv
	I0904 01:07:56.395247    8784 start.go:918] validating driver "hyperv" against &{Name:NoKubernetes-686800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-686800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.124.184 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 01:07:56.395900    8784 start.go:929] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 01:07:56.448819    8784 start.go:1892] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0904 01:07:56.448819    8784 cni.go:84] Creating CNI manager for ""
	I0904 01:07:56.448819    8784 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0904 01:07:56.448819    8784 start.go:1892] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0904 01:07:56.449819    8784 start.go:348] cluster config:
	{Name:NoKubernetes-686800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-686800 Namespace:default APIS
erverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.124.184 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 01:07:56.449819    8784 iso.go:125] acquiring lock: {Name:mk966bde02eeea119c68f0830e579f0a83ec9e11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 01:07:56.454823    8784 out.go:179] * Starting minikube without Kubernetes in cluster NoKubernetes-686800
	I0904 01:07:56.458819    8784 preload.go:131] Checking if preload exists for k8s version v0.0.0 and runtime docker
	W0904 01:07:56.511005    8784 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I0904 01:07:56.511005    8784 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\NoKubernetes-686800\config.json ...
	I0904 01:07:56.514239    8784 start.go:360] acquireMachinesLock for NoKubernetes-686800: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-686800 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestNoKubernetes/serial/StartWithStopK8s]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-686800 -n NoKubernetes-686800
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-686800 -n NoKubernetes-686800: (12.3556913s)
helpers_test.go:252: <<< TestNoKubernetes/serial/StartWithStopK8s FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestNoKubernetes/serial/StartWithStopK8s]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-686800 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe -p NoKubernetes-686800 logs -n 25: (8.4004369s)
helpers_test.go:260: TestNoKubernetes/serial/StartWithStopK8s logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                              ARGS                                                                                               │          PROFILE          │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p running-upgrade-858500 --memory=3072 --vm-driver=hyperv                                                                                                                                      │ minikube                  │ minikube6\jenkins │ v1.26.0 │ 04 Sep 25 00:50 GMT │ 04 Sep 25 00:57 GMT │
	│ delete  │ -p offline-docker-143600                                                                                                                                                                        │ offline-docker-143600     │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:52 UTC │ 04 Sep 25 00:52 UTC │
	│ start   │ -p pause-590700 --memory=3072 --install-addons=false --wait=all --driver=hyperv                                                                                                                 │ pause-590700              │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:52 UTC │ 04 Sep 25 01:00 UTC │
	│ stop    │ stopped-upgrade-326200 stop                                                                                                                                                                     │ minikube                  │ minikube6\jenkins │ v1.26.0 │ 04 Sep 25 00:53 GMT │ 04 Sep 25 00:54 GMT │
	│ start   │ -p stopped-upgrade-326200 --memory=3072 --alsologtostderr -v=1 --driver=hyperv                                                                                                                  │ stopped-upgrade-326200    │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:54 UTC │ 04 Sep 25 01:02 UTC │
	│ start   │ -p running-upgrade-858500 --memory=3072 --alsologtostderr -v=1 --driver=hyperv                                                                                                                  │ running-upgrade-858500    │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:57 UTC │ 04 Sep 25 01:03 UTC │
	│ start   │ -p kubernetes-upgrade-143600 --memory=3072 --kubernetes-version=v1.20.0 --driver=hyperv                                                                                                         │ kubernetes-upgrade-143600 │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:59 UTC │                     │
	│ start   │ -p kubernetes-upgrade-143600 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=hyperv                                                                                  │ kubernetes-upgrade-143600 │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 00:59 UTC │ 04 Sep 25 01:04 UTC │
	│ start   │ -p pause-590700 --alsologtostderr -v=1 --driver=hyperv                                                                                                                                          │ pause-590700              │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 01:00 UTC │ 04 Sep 25 01:05 UTC │
	│ mount   │ C:\Users\jenkins.minikube6:/minikube-host --profile stopped-upgrade-326200 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                        │ stopped-upgrade-326200    │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 01:01 UTC │                     │
	│ delete  │ -p stopped-upgrade-326200                                                                                                                                                                       │ stopped-upgrade-326200    │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 01:02 UTC │ 04 Sep 25 01:03 UTC │
	│ start   │ -p NoKubernetes-686800 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv                                                                                                                │ NoKubernetes-686800       │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 01:03 UTC │                     │
	│ start   │ -p NoKubernetes-686800 --memory=3072 --alsologtostderr -v=5 --driver=hyperv                                                                                                                     │ NoKubernetes-686800       │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 01:03 UTC │ 04 Sep 25 01:07 UTC │
	│ mount   │ C:\Users\jenkins.minikube6:/minikube-host --profile running-upgrade-858500 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                        │ running-upgrade-858500    │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 01:03 UTC │                     │
	│ delete  │ -p running-upgrade-858500                                                                                                                                                                       │ running-upgrade-858500    │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 01:03 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-143600                                                                                                                                                                    │ kubernetes-upgrade-143600 │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 01:04 UTC │ 04 Sep 25 01:05 UTC │
	│ start   │ -p cert-expiration-460100 --memory=3072 --cert-expiration=3m --driver=hyperv                                                                                                                    │ cert-expiration-460100    │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 01:05 UTC │                     │
	│ pause   │ -p pause-590700 --alsologtostderr -v=5                                                                                                                                                          │ pause-590700              │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 01:05 UTC │ 04 Sep 25 01:05 UTC │
	│ start   │ -p cert-options-482800 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv │ cert-options-482800       │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 01:05 UTC │                     │
	│ unpause │ -p pause-590700 --alsologtostderr -v=5                                                                                                                                                          │ pause-590700              │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 01:05 UTC │ 04 Sep 25 01:05 UTC │
	│ pause   │ -p pause-590700 --alsologtostderr -v=5                                                                                                                                                          │ pause-590700              │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 01:05 UTC │ 04 Sep 25 01:06 UTC │
	│ delete  │ -p pause-590700 --alsologtostderr -v=5                                                                                                                                                          │ pause-590700              │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 01:06 UTC │ 04 Sep 25 01:06 UTC │
	│ delete  │ -p pause-590700                                                                                                                                                                                 │ pause-590700              │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 01:07 UTC │ 04 Sep 25 01:07 UTC │
	│ start   │ -p force-systemd-env-259400 --memory=3072 --alsologtostderr -v=5 --driver=hyperv                                                                                                                │ force-systemd-env-259400  │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 01:07 UTC │                     │
	│ start   │ -p NoKubernetes-686800 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=hyperv                                                                                                     │ NoKubernetes-686800       │ minikube6\jenkins │ v1.36.0 │ 04 Sep 25 01:07 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/04 01:07:50
	Running on machine: minikube6
	Binary: Built with gc go1.24.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 01:07:50.701958    8784 out.go:360] Setting OutFile to fd 1880 ...
	I0904 01:07:50.773991    8784 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 01:07:50.773991    8784 out.go:374] Setting ErrFile to fd 1740...
	I0904 01:07:50.773991    8784 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 01:07:50.791971    8784 out.go:368] Setting JSON to false
	I0904 01:07:50.794974    8784 start.go:130] hostinfo: {"hostname":"minikube6","uptime":31175,"bootTime":1756916894,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6282 Build 19045.6282","kernelVersion":"10.0.19045.6282 Build 19045.6282","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0904 01:07:50.794974    8784 start.go:138] gopshost.Virtualization returned error: not implemented yet
	I0904 01:07:50.798971    8784 out.go:179] * [NoKubernetes-686800] minikube v1.36.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6282 Build 19045.6282
	I0904 01:07:50.801975    8784 notify.go:220] Checking for updates...
	I0904 01:07:50.804987    8784 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0904 01:07:50.806964    8784 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 01:07:50.809982    8784 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0904 01:07:50.812982    8784 out.go:179]   - MINIKUBE_LOCATION=21341
	I0904 01:07:50.816007    8784 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 01:07:46.808038   13004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-460100 ).state
	I0904 01:07:49.066168   13004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 01:07:49.066168   13004 main.go:141] libmachine: [stderr =====>] : 
	I0904 01:07:49.066168   13004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-460100 ).networkadapters[0]).ipaddresses[0]
	I0904 01:07:50.818955    8784 config.go:182] Loaded profile config "NoKubernetes-686800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0904 01:07:50.820011    8784 start.go:1892] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0904 01:07:50.820011    8784 start.go:1797] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I0904 01:07:50.820011    8784 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 01:07:51.788572   13004 main.go:141] libmachine: [stdout =====>] : 172.25.123.197
	
	I0904 01:07:51.788638   13004 main.go:141] libmachine: [stderr =====>] : 
	I0904 01:07:51.788703   13004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-460100 ).state
	I0904 01:07:54.036333   13004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 01:07:54.036333   13004 main.go:141] libmachine: [stderr =====>] : 
	I0904 01:07:54.036333   13004 machine.go:93] provisionDockerMachine start ...
	I0904 01:07:54.037640   13004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-460100 ).state
	I0904 01:07:56.392623    8784 out.go:179] * Using the hyperv driver based on existing profile
	I0904 01:07:56.395247    8784 start.go:304] selected driver: hyperv
	I0904 01:07:56.395247    8784 start.go:918] validating driver "hyperv" against &{Name:NoKubernetes-686800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-686800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.124.184 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 01:07:56.395900    8784 start.go:929] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 01:07:56.448819    8784 start.go:1892] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0904 01:07:56.448819    8784 cni.go:84] Creating CNI manager for ""
	I0904 01:07:56.448819    8784 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0904 01:07:56.448819    8784 start.go:1892] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0904 01:07:56.449819    8784 start.go:348] cluster config:
	{Name:NoKubernetes-686800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-686800 Namespace:default APIS
erverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.124.184 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 01:07:56.449819    8784 iso.go:125] acquiring lock: {Name:mk966bde02eeea119c68f0830e579f0a83ec9e11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 01:07:56.454823    8784 out.go:179] * Starting minikube without Kubernetes in cluster NoKubernetes-686800
	I0904 01:07:56.458819    8784 preload.go:131] Checking if preload exists for k8s version v0.0.0 and runtime docker
	W0904 01:07:56.511005    8784 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I0904 01:07:56.511005    8784 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\NoKubernetes-686800\config.json ...
	I0904 01:07:56.514239    8784 start.go:360] acquireMachinesLock for NoKubernetes-686800: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 01:07:56.182994   13004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 01:07:56.182994   13004 main.go:141] libmachine: [stderr =====>] : 
	I0904 01:07:56.183463   13004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-460100 ).networkadapters[0]).ipaddresses[0]
	I0904 01:07:59.189394   13004 main.go:141] libmachine: [stdout =====>] : 172.25.123.197
	
	I0904 01:07:59.189394   13004 main.go:141] libmachine: [stderr =====>] : 
	I0904 01:07:59.195274   13004 main.go:141] libmachine: Using SSH client type: native
	I0904 01:07:59.196036   13004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.123.197 22 <nil> <nil>}
	I0904 01:07:59.196036   13004 main.go:141] libmachine: About to run SSH command:
	hostname
	I0904 01:07:59.326764   13004 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0904 01:07:59.326764   13004 buildroot.go:166] provisioning hostname "cert-expiration-460100"
	I0904 01:07:59.326834   13004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-460100 ).state
	I0904 01:08:01.373971   13004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 01:08:01.373971   13004 main.go:141] libmachine: [stderr =====>] : 
	I0904 01:08:01.373971   13004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-460100 ).networkadapters[0]).ipaddresses[0]
	I0904 01:08:03.823052   13004 main.go:141] libmachine: [stdout =====>] : 172.25.123.197
	
	I0904 01:08:03.823052   13004 main.go:141] libmachine: [stderr =====>] : 
	I0904 01:08:03.829640   13004 main.go:141] libmachine: Using SSH client type: native
	I0904 01:08:03.830320   13004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.123.197 22 <nil> <nil>}
	I0904 01:08:03.830320   13004 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-460100 && echo "cert-expiration-460100" | sudo tee /etc/hostname
	I0904 01:08:03.996053   13004 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-460100
	
	I0904 01:08:03.996116   13004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-460100 ).state
	I0904 01:08:06.064474   13004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 01:08:06.064474   13004 main.go:141] libmachine: [stderr =====>] : 
	I0904 01:08:06.064775   13004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-460100 ).networkadapters[0]).ipaddresses[0]
	I0904 01:08:08.534931   13004 main.go:141] libmachine: [stdout =====>] : 172.25.123.197
	
	I0904 01:08:08.535206   13004 main.go:141] libmachine: [stderr =====>] : 
	I0904 01:08:08.540913   13004 main.go:141] libmachine: Using SSH client type: native
	I0904 01:08:08.540913   13004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abaa0] 0x7ae5e0 <nil>  [] 0s} 172.25.123.197 22 <nil> <nil>}
	I0904 01:08:08.540913   13004 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-460100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-460100/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-460100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 01:08:08.698265   13004 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 01:08:08.698265   13004 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0904 01:08:08.698265   13004 buildroot.go:174] setting up certificates
	I0904 01:08:08.698265   13004 provision.go:84] configureAuth start
	I0904 01:08:08.698358   13004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-460100 ).state
	I0904 01:08:10.830997   13004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 01:08:10.830997   13004 main.go:141] libmachine: [stderr =====>] : 
	I0904 01:08:10.831489   13004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-460100 ).networkadapters[0]).ipaddresses[0]
	I0904 01:08:13.437124   13004 main.go:141] libmachine: [stdout =====>] : 172.25.123.197
	
	I0904 01:08:13.437124   13004 main.go:141] libmachine: [stderr =====>] : 
	I0904 01:08:13.437403   13004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-460100 ).state
	I0904 01:08:15.576897   13004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 01:08:15.576897   13004 main.go:141] libmachine: [stderr =====>] : 
	I0904 01:08:15.577328   13004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-460100 ).networkadapters[0]).ipaddresses[0]
	I0904 01:08:18.155025   13004 main.go:141] libmachine: [stdout =====>] : 172.25.123.197
	
	I0904 01:08:18.155025   13004 main.go:141] libmachine: [stderr =====>] : 
	I0904 01:08:18.155025   13004 provision.go:143] copyHostCerts
	I0904 01:08:18.156600   13004 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0904 01:08:18.156600   13004 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0904 01:08:18.157304   13004 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0904 01:08:18.159483   13004 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0904 01:08:18.159483   13004 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0904 01:08:18.160047   13004 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0904 01:08:18.162583   13004 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0904 01:08:18.162583   13004 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0904 01:08:18.162849   13004 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0904 01:08:18.164288   13004 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.cert-expiration-460100 san=[127.0.0.1 172.25.123.197 cert-expiration-460100 localhost minikube]
	I0904 01:08:18.619712   13004 provision.go:177] copyRemoteCerts
	I0904 01:08:18.633165   13004 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 01:08:18.633268   13004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-460100 ).state
	I0904 01:08:20.801177   13004 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 01:08:20.801177   13004 main.go:141] libmachine: [stderr =====>] : 
	I0904 01:08:20.801177   13004 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-460100 ).networkadapters[0]).ipaddresses[0]
	
	
	==> Docker <==
	Sep 04 01:07:06 NoKubernetes-686800 dockerd[1790]: time="2025-09-04T01:07:06.316309437Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	Sep 04 01:07:06 NoKubernetes-686800 dockerd[1790]: time="2025-09-04T01:07:06.316482137Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	Sep 04 01:07:06 NoKubernetes-686800 dockerd[1790]: time="2025-09-04T01:07:06.316500737Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	Sep 04 01:07:06 NoKubernetes-686800 dockerd[1790]: time="2025-09-04T01:07:06.341121937Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	Sep 04 01:07:06 NoKubernetes-686800 dockerd[1790]: time="2025-09-04T01:07:06.817395737Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 04 01:07:07 NoKubernetes-686800 dockerd[1790]: time="2025-09-04T01:07:07.780673437Z" level=info msg="Loading containers: start."
	Sep 04 01:07:08 NoKubernetes-686800 dockerd[1790]: time="2025-09-04T01:07:08.004339037Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 04 01:07:08 NoKubernetes-686800 dockerd[1790]: time="2025-09-04T01:07:08.143804737Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint_count 4b084e7d3a611c0407df8c09692f98305a1cd167734b28eab5532aa74d1b6c27], retrying...."
	Sep 04 01:07:08 NoKubernetes-686800 dockerd[1790]: time="2025-09-04T01:07:08.243611737Z" level=info msg="Loading containers: done."
	Sep 04 01:07:08 NoKubernetes-686800 dockerd[1790]: time="2025-09-04T01:07:08.267276937Z" level=info msg="Docker daemon" commit=e77ff99 containerd-snapshotter=false storage-driver=overlay2 version=28.3.2
	Sep 04 01:07:08 NoKubernetes-686800 dockerd[1790]: time="2025-09-04T01:07:08.267443537Z" level=info msg="Initializing buildkit"
	Sep 04 01:07:08 NoKubernetes-686800 dockerd[1790]: time="2025-09-04T01:07:08.297547637Z" level=info msg="Completed buildkit initialization"
	Sep 04 01:07:08 NoKubernetes-686800 dockerd[1790]: time="2025-09-04T01:07:08.306876337Z" level=info msg="Daemon has completed initialization"
	Sep 04 01:07:08 NoKubernetes-686800 dockerd[1790]: time="2025-09-04T01:07:08.307295937Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 04 01:07:08 NoKubernetes-686800 dockerd[1790]: time="2025-09-04T01:07:08.307509237Z" level=info msg="API listen on /run/docker.sock"
	Sep 04 01:07:08 NoKubernetes-686800 dockerd[1790]: time="2025-09-04T01:07:08.307585937Z" level=info msg="API listen on [::]:2376"
	Sep 04 01:07:08 NoKubernetes-686800 systemd[1]: Started Docker Application Container Engine.
	Sep 04 01:07:18 NoKubernetes-686800 cri-dockerd[1644]: time="2025-09-04T01:07:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ee9816d0851c0fb292dc6f6a44ded33dc65eb62d11cf460806583fe4321eec66/resolv.conf as [nameserver 172.25.112.1]"
	Sep 04 01:07:18 NoKubernetes-686800 cri-dockerd[1644]: time="2025-09-04T01:07:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4ed682082af062960280734972e5efc07b32337a52cba63bc19f5219507d3774/resolv.conf as [nameserver 172.25.112.1]"
	Sep 04 01:07:18 NoKubernetes-686800 cri-dockerd[1644]: time="2025-09-04T01:07:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f1985cd111cd660fc2503606bb7d89f1253aa333811ab098baf3fa5a4155af62/resolv.conf as [nameserver 172.25.112.1]"
	Sep 04 01:07:18 NoKubernetes-686800 cri-dockerd[1644]: time="2025-09-04T01:07:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/42fb576b5ac7503b1318c9c2956379eaf1ca7a70abd0c15019fb25e838191b5f/resolv.conf as [nameserver 172.25.112.1]"
	Sep 04 01:07:33 NoKubernetes-686800 cri-dockerd[1644]: time="2025-09-04T01:07:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a8be7b3239398027e4fdebd72c7c9236c474637e1c099b37ee74861344f4e32c/resolv.conf as [nameserver 172.25.112.1]"
	Sep 04 01:07:33 NoKubernetes-686800 cri-dockerd[1644]: time="2025-09-04T01:07:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c12f942a6291c8c73b4327bffd1cd9638978f2964d57490e1ed3ba55464011ee/resolv.conf as [nameserver 172.25.112.1]"
	Sep 04 01:07:36 NoKubernetes-686800 cri-dockerd[1644]: time="2025-09-04T01:07:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/199989a3c33a1f72cc595f032f1bb191bf51179211d1b791baa03d3fc6ef0142/resolv.conf as [nameserver 172.25.112.1]"
	Sep 04 01:07:37 NoKubernetes-686800 cri-dockerd[1644]: time="2025-09-04T01:07:37Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	cea315b22a8a0       6e38f40d628db       54 seconds ago       Running             storage-provisioner       0                   199989a3c33a1       storage-provisioner
	ef37fd3562da0       52546a367cc9e       57 seconds ago       Running             coredns                   0                   c12f942a6291c       coredns-66bc5c9577-5xgvv
	0c5b51156cdd3       df0860106674d       57 seconds ago       Running             kube-proxy                0                   a8be7b3239398       kube-proxy-trxpw
	60a679e0efaa3       5f1f5298c888d       About a minute ago   Running             etcd                      0                   42fb576b5ac75       etcd-nokubernetes-686800
	78f8140b9e2dd       46169d968e920       About a minute ago   Running             kube-scheduler            0                   f1985cd111cd6       kube-scheduler-nokubernetes-686800
	c15702a229dff       a0af72f2ec6d6       About a minute ago   Running             kube-controller-manager   0                   4ed682082af06       kube-controller-manager-nokubernetes-686800
	a5df5420f2a0f       90550c43ad2bc       About a minute ago   Running             kube-apiserver            0                   ee9816d0851c0       kube-apiserver-nokubernetes-686800
	
	
	==> coredns [ef37fd3562da] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6a91ebc4603c280fffd96028976a93bc50f334c3ff12031fdaf482f119377dc83ef299e1deb76633d43d96f71e1d16982cd22dedeb608e78281d30e2ecaef945
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45895 - 48621 "HINFO IN 7643312044413664737.7115139713454274121. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.036411612s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v0.0.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v0.0.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	sudo: /var/lib/minikube/binaries/v0.0.0/kubectl: command not found
	
	
	==> dmesg <==
	[Sep 4 01:05] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000000] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +0.002383] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.000012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001708] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	              * this clock source is slow. Consider trying other clock sources
	[  +0.182540] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +0.003111] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.016710] (rpcbind)[114]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.596302] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep 4 01:06] kauditd_printk_skb: 96 callbacks suppressed
	[  +0.214881] kauditd_printk_skb: 221 callbacks suppressed
	[Sep 4 01:07] kauditd_printk_skb: 6 callbacks suppressed
	[  +0.174164] kauditd_printk_skb: 193 callbacks suppressed
	[  +0.278579] kauditd_printk_skb: 159 callbacks suppressed
	[  +5.132105] kauditd_printk_skb: 12 callbacks suppressed
	[Sep 4 01:08] kauditd_printk_skb: 213 callbacks suppressed
	
	
	==> etcd [60a679e0efaa] <==
	{"level":"warn","ts":"2025-09-04T01:07:21.374707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T01:07:21.390353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T01:07:21.412113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T01:07:21.432859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T01:07:21.453747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T01:07:21.490627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T01:07:21.503870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T01:07:21.525494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T01:07:21.568346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T01:07:21.575143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T01:07:21.599349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T01:07:21.643162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T01:07:21.679733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T01:07:21.688047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T01:07:21.711419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T01:07:21.746251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T01:07:21.747721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T01:07:21.774864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T01:07:21.786014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T01:07:21.799026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T01:07:21.817691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T01:07:21.834052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T01:07:21.846000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T01:07:21.859351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-04T01:07:21.978696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54692","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 01:08:30 up 3 min,  0 users,  load average: 1.23, 0.50, 0.19
	Linux NoKubernetes-686800 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Jul 26 03:21:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [a5df5420f2a0] <==
	I0904 01:07:23.244085       1 cache.go:39] Caches are synced for autoregister controller
	I0904 01:07:23.265074       1 controller.go:667] quota admission added evaluator for: namespaces
	I0904 01:07:23.318997       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I0904 01:07:23.328105       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0904 01:07:23.319111       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0904 01:07:23.481564       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0904 01:07:23.481837       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I0904 01:07:23.931407       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0904 01:07:23.948776       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0904 01:07:23.949100       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0904 01:07:25.359657       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0904 01:07:25.461622       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0904 01:07:25.582862       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0904 01:07:25.600355       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.25.124.184]
	I0904 01:07:25.602429       1 controller.go:667] quota admission added evaluator for: endpoints
	I0904 01:07:25.615345       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0904 01:07:26.242038       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0904 01:07:26.617566       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0904 01:07:26.661789       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0904 01:07:26.687978       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0904 01:07:31.534334       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0904 01:07:31.548370       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0904 01:07:32.077124       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0904 01:07:32.329094       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0904 01:08:21.453969       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [c15702a229df] <==
	I0904 01:07:31.289874       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0904 01:07:31.290073       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0904 01:07:31.290210       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0904 01:07:31.290423       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0904 01:07:31.290568       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0904 01:07:31.291835       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0904 01:07:31.305991       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0904 01:07:31.307008       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0904 01:07:31.316628       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0904 01:07:31.323737       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0904 01:07:31.324278       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="nokubernetes-686800" podCIDRs=["10.244.0.0/24"]
	I0904 01:07:31.328748       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0904 01:07:31.329273       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0904 01:07:31.330145       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0904 01:07:31.332129       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0904 01:07:31.332679       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0904 01:07:31.332965       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0904 01:07:31.334271       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0904 01:07:31.335750       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0904 01:07:31.336117       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0904 01:07:31.337872       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	E0904 01:07:31.344157       1 range_allocator.go:433] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"nokubernetes-686800\" is invalid: [spec.podCIDRs: Invalid value: [\"10.244.1.0/24\",\"10.244.0.0/24\"]: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="nokubernetes-686800" podCIDRs=["10.244.1.0/24"]
	E0904 01:07:31.344206       1 range_allocator.go:439] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"nokubernetes-686800\" is invalid: [spec.podCIDRs: Invalid value: [\"10.244.1.0/24\",\"10.244.0.0/24\"]: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="nokubernetes-686800"
	E0904 01:07:31.344249       1 range_allocator.go:252] "Unhandled Error" err="error syncing 'nokubernetes-686800': failed to patch node CIDR: Node \"nokubernetes-686800\" is invalid: [spec.podCIDRs: Invalid value: [\"10.244.1.0/24\",\"10.244.0.0/24\"]: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0904 01:07:31.345784       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	
	
	==> kube-proxy [0c5b51156cdd] <==
	I0904 01:07:33.866297       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0904 01:07:33.967040       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0904 01:07:33.967098       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["172.25.124.184"]
	E0904 01:07:33.967227       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 01:07:34.046616       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0904 01:07:34.046749       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0904 01:07:34.046792       1 server_linux.go:132] "Using iptables Proxier"
	I0904 01:07:34.067881       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 01:07:34.069556       1 server.go:527] "Version info" version="v1.34.0"
	I0904 01:07:34.070210       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 01:07:34.075467       1 config.go:200] "Starting service config controller"
	I0904 01:07:34.075562       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0904 01:07:34.075584       1 config.go:106] "Starting endpoint slice config controller"
	I0904 01:07:34.075589       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0904 01:07:34.075602       1 config.go:403] "Starting serviceCIDR config controller"
	I0904 01:07:34.075607       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0904 01:07:34.081797       1 config.go:309] "Starting node config controller"
	I0904 01:07:34.081835       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0904 01:07:34.081843       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0904 01:07:34.176567       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0904 01:07:34.176609       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0904 01:07:34.176626       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [78f8140b9e2d] <==
	E0904 01:07:23.364450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0904 01:07:23.364607       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0904 01:07:23.364644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0904 01:07:23.364719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0904 01:07:23.364763       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0904 01:07:23.364973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0904 01:07:23.367352       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0904 01:07:24.219669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0904 01:07:24.362729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0904 01:07:24.382788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0904 01:07:24.387031       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0904 01:07:24.413060       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0904 01:07:24.458523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0904 01:07:24.492431       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0904 01:07:24.524333       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0904 01:07:24.601761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0904 01:07:24.737197       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0904 01:07:24.782478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0904 01:07:24.800598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0904 01:07:24.803147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0904 01:07:24.809612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0904 01:07:24.826356       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0904 01:07:24.833028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0904 01:07:24.843716       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I0904 01:07:26.388576       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 04 01:07:27 NoKubernetes-686800 kubelet[2817]: I0904 01:07:27.175587    2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c217093bd248f929bdad9f2f899f13a-k8s-certs\") pod \"kube-apiserver-nokubernetes-686800\" (UID: \"4c217093bd248f929bdad9f2f899f13a\") " pod="kube-system/kube-apiserver-nokubernetes-686800"
	Sep 04 01:07:27 NoKubernetes-686800 kubelet[2817]: I0904 01:07:27.175612    2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c217093bd248f929bdad9f2f899f13a-usr-share-ca-certificates\") pod \"kube-apiserver-nokubernetes-686800\" (UID: \"4c217093bd248f929bdad9f2f899f13a\") " pod="kube-system/kube-apiserver-nokubernetes-686800"
	Sep 04 01:07:27 NoKubernetes-686800 kubelet[2817]: I0904 01:07:27.713701    2817 apiserver.go:52] "Watching apiserver"
	Sep 04 01:07:27 NoKubernetes-686800 kubelet[2817]: I0904 01:07:27.773609    2817 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 04 01:07:27 NoKubernetes-686800 kubelet[2817]: I0904 01:07:27.943761    2817 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-nokubernetes-686800"
	Sep 04 01:07:28 NoKubernetes-686800 kubelet[2817]: E0904 01:07:28.037904    2817 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-nokubernetes-686800\" already exists" pod="kube-system/etcd-nokubernetes-686800"
	Sep 04 01:07:28 NoKubernetes-686800 kubelet[2817]: I0904 01:07:28.193627    2817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-nokubernetes-686800" podStartSLOduration=1.193608264 podStartE2EDuration="1.193608264s" podCreationTimestamp="2025-09-04 01:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 01:07:28.160176668 +0000 UTC m=+1.586759609" watchObservedRunningTime="2025-09-04 01:07:28.193608264 +0000 UTC m=+1.620191205"
	Sep 04 01:07:28 NoKubernetes-686800 kubelet[2817]: I0904 01:07:28.195617    2817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-nokubernetes-686800" podStartSLOduration=1.195608088 podStartE2EDuration="1.195608088s" podCreationTimestamp="2025-09-04 01:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 01:07:28.193419062 +0000 UTC m=+1.620002003" watchObservedRunningTime="2025-09-04 01:07:28.195608088 +0000 UTC m=+1.622190929"
	Sep 04 01:07:28 NoKubernetes-686800 kubelet[2817]: I0904 01:07:28.285746    2817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-nokubernetes-686800" podStartSLOduration=1.285724657 podStartE2EDuration="1.285724657s" podCreationTimestamp="2025-09-04 01:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 01:07:28.242099639 +0000 UTC m=+1.668682480" watchObservedRunningTime="2025-09-04 01:07:28.285724657 +0000 UTC m=+1.712307498"
	Sep 04 01:07:29 NoKubernetes-686800 kubelet[2817]: I0904 01:07:29.401292    2817 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Sep 04 01:07:32 NoKubernetes-686800 kubelet[2817]: I0904 01:07:32.377167    2817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-nokubernetes-686800" podStartSLOduration=8.377056196 podStartE2EDuration="8.377056196s" podCreationTimestamp="2025-09-04 01:07:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 01:07:28.28768898 +0000 UTC m=+1.714271821" watchObservedRunningTime="2025-09-04 01:07:32.377056196 +0000 UTC m=+5.803639037"
	Sep 04 01:07:32 NoKubernetes-686800 kubelet[2817]: I0904 01:07:32.433520    2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4065c47a-c7c3-4645-aaa1-89f28c777c39-kube-proxy\") pod \"kube-proxy-trxpw\" (UID: \"4065c47a-c7c3-4645-aaa1-89f28c777c39\") " pod="kube-system/kube-proxy-trxpw"
	Sep 04 01:07:32 NoKubernetes-686800 kubelet[2817]: I0904 01:07:32.433630    2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4065c47a-c7c3-4645-aaa1-89f28c777c39-xtables-lock\") pod \"kube-proxy-trxpw\" (UID: \"4065c47a-c7c3-4645-aaa1-89f28c777c39\") " pod="kube-system/kube-proxy-trxpw"
	Sep 04 01:07:32 NoKubernetes-686800 kubelet[2817]: I0904 01:07:32.433684    2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4065c47a-c7c3-4645-aaa1-89f28c777c39-lib-modules\") pod \"kube-proxy-trxpw\" (UID: \"4065c47a-c7c3-4645-aaa1-89f28c777c39\") " pod="kube-system/kube-proxy-trxpw"
	Sep 04 01:07:32 NoKubernetes-686800 kubelet[2817]: I0904 01:07:32.433732    2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pld6k\" (UniqueName: \"kubernetes.io/projected/4065c47a-c7c3-4645-aaa1-89f28c777c39-kube-api-access-pld6k\") pod \"kube-proxy-trxpw\" (UID: \"4065c47a-c7c3-4645-aaa1-89f28c777c39\") " pod="kube-system/kube-proxy-trxpw"
	Sep 04 01:07:32 NoKubernetes-686800 kubelet[2817]: I0904 01:07:32.634157    2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c2ffbdc2-0c09-405e-8dd3-2b7d700ddbf6-config-volume\") pod \"coredns-66bc5c9577-5xgvv\" (UID: \"c2ffbdc2-0c09-405e-8dd3-2b7d700ddbf6\") " pod="kube-system/coredns-66bc5c9577-5xgvv"
	Sep 04 01:07:32 NoKubernetes-686800 kubelet[2817]: I0904 01:07:32.634368    2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pklch\" (UniqueName: \"kubernetes.io/projected/c2ffbdc2-0c09-405e-8dd3-2b7d700ddbf6-kube-api-access-pklch\") pod \"coredns-66bc5c9577-5xgvv\" (UID: \"c2ffbdc2-0c09-405e-8dd3-2b7d700ddbf6\") " pod="kube-system/coredns-66bc5c9577-5xgvv"
	Sep 04 01:07:33 NoKubernetes-686800 kubelet[2817]: I0904 01:07:33.222187    2817 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8be7b3239398027e4fdebd72c7c9236c474637e1c099b37ee74861344f4e32c"
	Sep 04 01:07:34 NoKubernetes-686800 kubelet[2817]: I0904 01:07:34.320188    2817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-trxpw" podStartSLOduration=2.320154969 podStartE2EDuration="2.320154969s" podCreationTimestamp="2025-09-04 01:07:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 01:07:34.290869034 +0000 UTC m=+7.717451975" watchObservedRunningTime="2025-09-04 01:07:34.320154969 +0000 UTC m=+7.746737910"
	Sep 04 01:07:34 NoKubernetes-686800 kubelet[2817]: I0904 01:07:34.321117    2817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-5xgvv" podStartSLOduration=2.321107477 podStartE2EDuration="2.321107477s" podCreationTimestamp="2025-09-04 01:07:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 01:07:34.319582265 +0000 UTC m=+7.746165106" watchObservedRunningTime="2025-09-04 01:07:34.321107477 +0000 UTC m=+7.747690318"
	Sep 04 01:07:36 NoKubernetes-686800 kubelet[2817]: I0904 01:07:36.063394    2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq2dq\" (UniqueName: \"kubernetes.io/projected/83faa18e-acf5-42a1-ade4-dee41cdbda33-kube-api-access-kq2dq\") pod \"storage-provisioner\" (UID: \"83faa18e-acf5-42a1-ade4-dee41cdbda33\") " pod="kube-system/storage-provisioner"
	Sep 04 01:07:36 NoKubernetes-686800 kubelet[2817]: I0904 01:07:36.064273    2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/83faa18e-acf5-42a1-ade4-dee41cdbda33-tmp\") pod \"storage-provisioner\" (UID: \"83faa18e-acf5-42a1-ade4-dee41cdbda33\") " pod="kube-system/storage-provisioner"
	Sep 04 01:07:37 NoKubernetes-686800 kubelet[2817]: I0904 01:07:37.411409    2817 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 04 01:07:37 NoKubernetes-686800 kubelet[2817]: I0904 01:07:37.412875    2817 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 04 01:07:37 NoKubernetes-686800 kubelet[2817]: I0904 01:07:37.475103    2817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=2.47508432 podStartE2EDuration="2.47508432s" podCreationTimestamp="2025-09-04 01:07:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 01:07:37.345533161 +0000 UTC m=+10.772116002" watchObservedRunningTime="2025-09-04 01:07:37.47508432 +0000 UTC m=+10.901667161"
	
	
	==> storage-provisioner [cea315b22a8a] <==
	W0904 01:08:05.371119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 01:08:07.376595       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 01:08:07.391141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 01:08:09.396528       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 01:08:09.406482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 01:08:11.412382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 01:08:11.429489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 01:08:13.434686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 01:08:13.445404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 01:08:15.450233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 01:08:15.464779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 01:08:17.470384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 01:08:17.478810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 01:08:19.483882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 01:08:19.500839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 01:08:21.504429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 01:08:21.515877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 01:08:23.532327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 01:08:23.547947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 01:08:25.554140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 01:08:25.595474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 01:08:27.600870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 01:08:27.610713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 01:08:29.617451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0904 01:08:29.631337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p NoKubernetes-686800 -n NoKubernetes-686800
helpers_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p NoKubernetes-686800 -n NoKubernetes-686800: (12.3962004s)
helpers_test.go:269: (dbg) Run:  kubectl --context NoKubernetes-686800 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestNoKubernetes/serial/StartWithStopK8s FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (53.37s)

TestNetworkPlugins/group/calico/Start (10800.46s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-852200 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperv
E0904 01:15:53.303878    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
panic: test timed out after 3h0m0s
	running tests:
		TestCertExpiration (11m50s)
		TestForceSystemdFlag (7m20s)
		TestNetworkPlugins (33m17s)
		TestNetworkPlugins/group/auto (3m35s)
		TestNetworkPlugins/group/auto/Start (3m35s)
		TestNetworkPlugins/group/calico (1m19s)
		TestNetworkPlugins/group/calico/Start (1m19s)
		TestStartStop (24m10s)

goroutine 2488 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2484 +0x394
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d

goroutine 1 [chan receive, 4 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1753 +0x486
testing.tRunner(0xc000786a80, 0xc0008bbbc8)
	/usr/local/go/src/testing/testing.go:1798 +0x104
testing.runTests(0xc000a80000, {0x60b0f20, 0x2b, 0x2b}, {0xffffffffffffffff?, 0xc0008f71e0?, 0x60d9060?})
	/usr/local/go/src/testing/testing.go:2277 +0x4b4
testing.(*M).Run(0xc000c13720)
	/usr/local/go/src/testing/testing.go:2142 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc000c13720)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0xa8

goroutine 2343 [chan receive, 24 minutes]:
testing.(*testState).waitParallel(0xc000712640)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc0014bae00)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc0014bae00)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0014bae00)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:92 +0x45
testing.tRunner(0xc0014bae00, 0xc000caa180)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2341
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2341 [chan receive, 24 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1753 +0x486
testing.tRunner(0xc0014ba8c0, 0x41b8f68)
	/usr/local/go/src/testing/testing.go:1798 +0x104
created by testing.(*T).Run in goroutine 2198
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2210 [chan receive, 34 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1753 +0x486
testing.tRunner(0xc000500540, 0xc00187c000)
	/usr/local/go/src/testing/testing.go:1798 +0x104
created by testing.(*T).Run in goroutine 2137
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2216 [chan receive, 34 minutes]:
testing.(*testState).waitParallel(0xc000712640)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc000501500)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000501500)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000501500)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc000501500, 0xc00186e280)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2210
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2445 [syscall]:
syscall.Syscall6(0x1f3df410ed0?, 0x20000?, 0xc0014c1808?, 0xc001638000?, 0xc0014c1808?, 0xc001817bf0?, 0xda7f85?, 0xd88e90?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x7bc, {0xc001654ff0?, 0x3010, 0xdfe17f?}, 0x20000?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:454
syscall.Read(0xc000b83688?, {0xc001654ff0?, 0x0?, 0x2?})
	/usr/local/go/src/syscall/syscall_windows.go:433 +0x2d
internal/poll.(*FD).Read(0xc000b83688, {0xc001654ff0, 0x3010, 0x3010})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0001621c0, {0xc001654ff0?, 0x533b?, 0x533b?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc000b81680, {0x4521080, 0xc0000c69e8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x4521200, 0xc000b81680}, {0x4521080, 0xc0000c69e8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001817e90?, {0x4521200, 0xc000b81680})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc001817f38?, {0x4521200?, 0xc000b81680?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x4521200, 0xc000b81680}, {0x4521160, 0xc0001621c0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc002092770?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 673
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

goroutine 2470 [syscall, 4 minutes]:
syscall.Syscall(0xc0015a9d00?, 0x1f3e4c43b98?, 0x5?, 0x10000060f71a0?, 0x1e?)
	/usr/local/go/src/runtime/syscall_windows.go:457 +0x29
syscall.WaitForSingleObject(0x5c0, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1149 +0x5a
os.(*Process).wait(0xc000813500?)
	/usr/local/go/src/os/exec_windows.go:28 +0xe6
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc000813500)
	/usr/local/go/src/os/exec/exec.go:922 +0x45
os/exec.(*Cmd).Run(0xc000813500)
	/usr/local/go/src/os/exec/exec.go:626 +0x2d
k8s.io/minikube/test/integration.Run(0xc000107c00, 0xc000813500)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc000107c00)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:112 +0x4b
testing.tRunner(0xc000107c00, 0xc000c00f30)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2211
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2472 [syscall, 4 minutes]:
syscall.Syscall6(0x1f3e48b1968?, 0x1f3df4105a0?, 0x2000?, 0xc001930808?, 0xc0006dc000?, 0xc000609bf0?, 0xda7f79?, 0xc0005d4580?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x75c, {0xc0006ddc6e?, 0x392, 0xdfe17f?}, 0x2000?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:454
syscall.Read(0xc00159cb48?, {0xc0006ddc6e?, 0x0?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:433 +0x2d
internal/poll.(*FD).Read(0xc00159cb48, {0xc0006ddc6e, 0x392, 0x392})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000c0e0e0, {0xc0006ddc6e?, 0x1000?, 0x1000?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc000c46270, {0x4521080, 0xc0000c6ad8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x4521200, 0xc000c46270}, {0x4521080, 0xc0000c6ad8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xdb0b97?, {0x4521200, 0xc000c46270})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc000609fa0?, {0x4521200?, 0xc000c46270?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x4521200, 0xc000c46270}, {0x4521160, 0xc000c0e0e0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0x41b8c58?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2470
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

goroutine 1290 [chan send, 141 minutes]:
os/exec.(*Cmd).watchCtx(0xc001876180, 0xc001821650)
	/usr/local/go/src/os/exec/exec.go:814 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 769
	/usr/local/go/src/os/exec/exec.go:775 +0x989

goroutine 2137 [chan receive, 34 minutes]:
testing.(*T).Run(0xc001546000, {0x37cf8a9?, 0xc000817f60?}, 0xc00187c000)
	/usr/local/go/src/testing/testing.go:1859 +0x414
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc001546000)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:52 +0xd3
testing.tRunner(0xc001546000, 0x41b8d40)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 149 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 148
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xbb

goroutine 147 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000901c10, 0x3b)
	/usr/local/go/src/runtime/sema.go:597 +0x15d
sync.(*Cond).Wait(0xc00008bce0?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x4578620)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000bdc660)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0xd5a27c?, 0x61278a0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x13
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x4563610?, 0xc0005b40e0?}, 0xd49de5?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x51
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x4563610, 0xc0005b40e0}, 0xc001b81f50, {0x4522b60, 0xc000b80810}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xe5
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x4522b60?, 0xc000b80810?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x46
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0001080f0, 0x3b9aca00, 0x0, 0x1, 0xc0005b40e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 125
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x1d9

goroutine 148 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x4563610, 0xc0005b40e0}, 0xc000cf7f50, 0xc000cf7f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x4563610, 0xc0005b40e0}, 0xa0?, 0xc000cf7f50, 0xc000cf7f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x4563610?, 0xc0005b40e0?}, 0x0?, 0x6b222c2232383236?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000cf7fd0?, 0xecc084?, 0xc000078f50?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 125
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x286

goroutine 124 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x45755a0, {{0x456aa88, 0xc0000d5e80?}, 0xc0009016c0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x378
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 123
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x272

goroutine 125 [chan receive, 171 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0xc000bdc660, 0xc0005b40e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x295
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 123
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x614

goroutine 705 [IO wait, 159 minutes]:
internal/poll.runtime_pollWait(0x1f3e4bbcb40, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xdfce13?, 0xd51ab6?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc00174a020, 0xc001717ba0)
	/usr/local/go/src/internal/poll/fd_windows.go:177 +0x105
internal/poll.(*FD).acceptOne(0xc00174a008, 0x2bc, {0xc000bb4000?, 0xc001717c00?, 0xe07545?}, 0xc001717c34?)
	/usr/local/go/src/internal/poll/fd_windows.go:946 +0x65
internal/poll.(*FD).Accept(0xc00174a008, 0xc001717d80)
	/usr/local/go/src/internal/poll/fd_windows.go:980 +0x1b6
net.(*netFD).accept(0xc00174a008)
	/usr/local/go/src/net/fd_windows.go:182 +0x4b
net.(*TCPListener).accept(0xc0007fa080)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1b
net.(*TCPListener).Accept(0xc0007fa080)
	/usr/local/go/src/net/tcpsock.go:380 +0x30
net/http.(*Server).Serve(0xc000926000, {0x4550cf0, 0xc0007fa080})
	/usr/local/go/src/net/http/server.go:3424 +0x30c
net/http.(*Server).ListenAndServe(0xc000926000)
	/usr/local/go/src/net/http/server.go:3350 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 702
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x129

goroutine 894 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x45755a0, {{0x456aa88, 0xc0000d5e80?}, 0xc000bca8c0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x378
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 870
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x272

goroutine 2212 [chan receive, 34 minutes]:
testing.(*testState).waitParallel(0xc000712640)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc0005008c0)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc0005008c0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0005008c0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0005008c0, 0xc00186e080)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2210
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 914 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x4563610, 0xc0005b40e0}, 0xc0014b5f50, 0xc0014b5f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x4563610, 0xc0005b40e0}, 0xa0?, 0xc0014b5f50, 0xc0014b5f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x4563610?, 0xc0005b40e0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0014b5fd0?, 0xecc084?, 0xc000bb6460?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 895
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x286

goroutine 2465 [syscall, 2 minutes]:
syscall.Syscall6(0x1f3e49870d0?, 0x1f3df410ed0?, 0x2000?, 0xc000780808?, 0xc0006d6000?, 0xc000bdbbf0?, 0xda7f79?, 0xc000bdbbf8?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x7ec, {0xc0006d7bf2?, 0x40e, 0xdfe17f?}, 0x2000?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:454
syscall.Read(0xc00044afc8?, {0xc0006d7bf2?, 0x0?, 0xc000bdbce0?})
	/usr/local/go/src/syscall/syscall_windows.go:433 +0x2d
internal/poll.(*FD).Read(0xc00044afc8, {0xc0006d7bf2, 0x40e, 0x40e})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000c6b30, {0xc0006d7bf2?, 0xc000bdbf38?, 0x2?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc0019e0360, {0x4521080, 0xc000c0e100})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x4521200, 0xc0019e0360}, {0x4521080, 0xc000c0e100}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x4521200, 0xc0019e0360})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc000bdbf38?, {0x4521200?, 0xc0019e0360?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x4521200, 0xc0019e0360}, {0x4521160, 0xc0000c6b30}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc0005b49a0?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2487
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

goroutine 2446 [select, 8 minutes]:
os/exec.(*Cmd).watchCtx(0xc000813200, 0xc0005b5260)
	/usr/local/go/src/os/exec/exec.go:789 +0xb2
created by os/exec.(*Cmd).Start in goroutine 673
	/usr/local/go/src/os/exec/exec.go:775 +0x989

goroutine 817 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc001e38d90, 0x35)
	/usr/local/go/src/runtime/sema.go:597 +0x15d
sync.(*Cond).Wait(0xc000617ce0?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x4578620)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001864900)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0xd5a27c?, 0x61278a0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x13
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x4563610?, 0xc0005b40e0?}, 0xd49de5?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x51
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x4563610, 0xc0005b40e0}, 0xc000617f50, {0x4522b60, 0xc00174c000}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xe5
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000787340?, {0x4522b60?, 0xc00174c000?}, 0x26?, 0xc001a18180?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x46
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001e003e0, 0x3b9aca00, 0x0, 0x1, 0xc0005b40e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 895
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x1d9

goroutine 915 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 914
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xbb

goroutine 2215 [chan receive, 34 minutes]:
testing.(*testState).waitParallel(0xc000712640)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc000500fc0)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000500fc0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000500fc0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc000500fc0, 0xc00186e200)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2210
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 671 [syscall, 4 minutes]:
syscall.Syscall(0xc001f77b68?, 0x0?, 0xe8f83b?, 0x1000000000000?, 0x1e?)
	/usr/local/go/src/runtime/syscall_windows.go:457 +0x29
syscall.WaitForSingleObject(0x744, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1149 +0x5a
os.(*Process).wait(0xc000813080?)
	/usr/local/go/src/os/exec_windows.go:28 +0xe6
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc000813080)
	/usr/local/go/src/os/exec/exec.go:922 +0x45
os/exec.(*Cmd).Run(0xc000813080)
	/usr/local/go/src/os/exec/exec.go:626 +0x2d
k8s.io/minikube/test/integration.Run(0xc0014f28c0, 0xc000813080)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertExpiration(0xc0014f28c0)
	/home/jenkins/workspace/Build_Cross/test/integration/cert_options_test.go:131 +0x576
testing.tRunner(0xc0014f28c0, 0x41b8c50)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 673 [syscall, 8 minutes]:
syscall.Syscall(0xc00060dbb0?, 0x0?, 0xe8f83b?, 0x1000000000000?, 0x1e?)
	/usr/local/go/src/runtime/syscall_windows.go:457 +0x29
syscall.WaitForSingleObject(0x558, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1149 +0x5a
os.(*Process).wait(0xc000813200?)
	/usr/local/go/src/os/exec_windows.go:28 +0xe6
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc000813200)
	/usr/local/go/src/os/exec/exec.go:922 +0x45
os/exec.(*Cmd).Run(0xc000813200)
	/usr/local/go/src/os/exec/exec.go:626 +0x2d
k8s.io/minikube/test/integration.Run(0xc0014f2c40, 0xc000813200)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestForceSystemdFlag(0xc0014f2c40)
	/home/jenkins/workspace/Build_Cross/test/integration/docker_test.go:91 +0x347
testing.tRunner(0xc0014f2c40, 0x41b8c98)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2211 [chan receive, 4 minutes]:
testing.(*T).Run(0xc000500700, {0x37cf8ae?, 0x4517f60?}, 0xc000c00f30)
	/usr/local/go/src/testing/testing.go:1859 +0x414
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000500700)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:111 +0x5bb
testing.tRunner(0xc000500700, 0xc00186e000)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2210
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2431 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0xc000813080, 0xc0005b4af0)
	/usr/local/go/src/os/exec/exec.go:789 +0xb2
created by os/exec.(*Cmd).Start in goroutine 671
	/usr/local/go/src/os/exec/exec.go:775 +0x989

goroutine 2218 [chan receive, 34 minutes]:
testing.(*testState).waitParallel(0xc000712640)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc000501dc0)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000501dc0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000501dc0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc000501dc0, 0xc00186e380)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2210
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 895 [chan receive, 149 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0xc001864900, 0xc0005b40e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x295
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 870
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x614

goroutine 2429 [syscall, 4 minutes]:
syscall.Syscall6(0x1f3e48b1aa8?, 0x1f3df410a38?, 0x800?, 0xc000781008?, 0xc000bb8800?, 0xc000bdfbf0?, 0xda7f79?, 0x200a2c2264637465?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x6a0, {0xc000bb8a05?, 0x5fb, 0xdfe17f?}, 0x800?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:454
syscall.Read(0xc000b83448?, {0xc000bb8a05?, 0x0?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:433 +0x2d
internal/poll.(*FD).Read(0xc000b83448, {0xc000bb8a05, 0x5fb, 0x5fb})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000c6a30, {0xc000bb8a05?, 0x1?, 0x0?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc0019e01b0, {0x4521080, 0xc000c0e060})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x4521200, 0xc0019e01b0}, {0x4521080, 0xc000c0e060}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000bdfe90?, {0x4521200, 0xc0019e01b0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc000bdfeb0?, {0x4521200?, 0xc0019e01b0?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x4521200, 0xc0019e01b0}, {0x4521160, 0xc0000c6a30}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc000bcbab0?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 671
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

goroutine 2213 [chan receive, 34 minutes]:
testing.(*testState).waitParallel(0xc000712640)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc000500a80)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000500a80)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000500a80)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc000500a80, 0xc00186e100)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2210
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2214 [chan receive, 34 minutes]:
testing.(*testState).waitParallel(0xc000712640)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc000500c40)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000500c40)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000500c40)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc000500c40, 0xc00186e180)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2210
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2198 [chan receive, 24 minutes]:
testing.(*T).Run(0xc001546c40, {0x37cf8a9?, 0xe92053?}, 0x41b8f68)
	/usr/local/go/src/testing/testing.go:1859 +0x414
k8s.io/minikube/test/integration.TestStartStop(0xc001546c40)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc001546c40, 0x41b8d88)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2217 [chan receive, 34 minutes]:
testing.(*testState).waitParallel(0xc000712640)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc000501880)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000501880)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000501880)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc000501880, 0xc00186e300)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2210
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2219 [chan receive, 2 minutes]:
testing.(*T).Run(0xc0014ba1c0, {0x37cf8ae?, 0x4517f60?}, 0xc000c47d70)
	/usr/local/go/src/testing/testing.go:1859 +0x414
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0014ba1c0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:111 +0x5bb
testing.tRunner(0xc0014ba1c0, 0xc00186e400)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2210
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2344 [chan receive, 24 minutes]:
testing.(*testState).waitParallel(0xc000712640)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc0014bafc0)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc0014bafc0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0014bafc0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:92 +0x45
testing.tRunner(0xc0014bafc0, 0xc000caa1c0)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2341
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2345 [chan receive, 24 minutes]:
testing.(*testState).waitParallel(0xc000712640)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc0014bb180)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc0014bb180)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0014bb180)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:92 +0x45
testing.tRunner(0xc0014bb180, 0xc000caa240)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2341
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2342 [chan receive, 24 minutes]:
testing.(*testState).waitParallel(0xc000712640)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc0014baa80)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc0014baa80)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0014baa80)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:92 +0x45
testing.tRunner(0xc0014baa80, 0xc000caa080)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2341
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2346 [chan receive, 24 minutes]:
testing.(*testState).waitParallel(0xc000712640)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc0014bb340)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc0014bb340)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0014bb340)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:92 +0x45
testing.tRunner(0xc0014bb340, 0xc000caa280)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2341
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2347 [chan receive, 24 minutes]:
testing.(*testState).waitParallel(0xc000712640)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc0014bb6c0)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc0014bb6c0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0014bb6c0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:92 +0x45
testing.tRunner(0xc0014bb6c0, 0xc000caa380)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2341
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2444 [syscall]:
syscall.Syscall6(0x1f3e4cdbf28?, 0x1f3df410a38?, 0x800?, 0xc000580008?, 0xc000ba4000?, 0xc000befbf0?, 0xda7f79?, 0xc0015f3c60?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x5e4, {0xc000ba42ae?, 0x552, 0xdfe17f?}, 0x800?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:454
syscall.Read(0xc000b83208?, {0xc000ba42ae?, 0x0?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:433 +0x2d
internal/poll.(*FD).Read(0xc000b83208, {0xc000ba42ae, 0x552, 0x552})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000162188, {0xc000ba42ae?, 0xc000beff38?, 0x2?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc000b81650, {0x4521080, 0xc000c0e170})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x4521200, 0xc000b81650}, {0x4521080, 0xc000c0e170}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x4521200, 0xc000b81650})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc000beff38?, {0x4521200?, 0xc000b81650?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x4521200, 0xc000b81650}, {0x4521160, 0xc000162188}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc0016604d0?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 673
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

goroutine 2430 [syscall, 4 minutes]:
syscall.Syscall6(0x1f3e4c43238?, 0x1f3df410ed0?, 0x200?, 0xc00006b808?, 0xc000c7e400?, 0xc001b7bbf0?, 0xda7f79?, 0x1?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x7c0, {0xc000c7e400?, 0x200, 0xdfe17f?}, 0x200?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:454
syscall.Read(0xc000b83b08?, {0xc000c7e400?, 0x0?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:433 +0x2d
internal/poll.(*FD).Read(0xc000b83b08, {0xc000c7e400, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000c6a80, {0xc000c7e400?, 0x1?, 0x0?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc0019e01e0, {0x4521080, 0xc000162140})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x4521200, 0xc0019e01e0}, {0x4521080, 0xc000162140}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x4521200, 0xc0019e01e0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc001b7beb0?, {0x4521200?, 0xc0019e01e0?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x4521200, 0xc0019e01e0}, {0x4521160, 0xc0000c6a80}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc002093810?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 671
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

goroutine 2473 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0xc000813500, 0xc001660310)
	/usr/local/go/src/os/exec/exec.go:789 +0xb2
created by os/exec.(*Cmd).Start in goroutine 2470
	/usr/local/go/src/os/exec/exec.go:775 +0x989

goroutine 2498 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc001740180, 0xc0005b4bd0)
	/usr/local/go/src/os/exec/exec.go:789 +0xb2
created by os/exec.(*Cmd).Start in goroutine 2487
	/usr/local/go/src/os/exec/exec.go:775 +0x989

goroutine 2471 [syscall, 4 minutes]:
syscall.Syscall6(0x1f3e48a8678?, 0x1f3df410ed0?, 0x400?, 0xc0004ac008?, 0xc0005e6800?, 0xc000cedbf0?, 0xda7f79?, 0x3633363720202020?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x68c, {0xc0005e69e6?, 0x21a, 0xdfe17f?}, 0x400?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:454
syscall.Read(0xc00159c488?, {0xc0005e69e6?, 0x0?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:433 +0x2d
internal/poll.(*FD).Read(0xc00159c488, {0xc0005e69e6, 0x21a, 0x21a})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000c0e0b0, {0xc0005e69e6?, 0x1?, 0x0?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc000c01e30, {0x4521080, 0xc000162168})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x4521200, 0xc000c01e30}, {0x4521080, 0xc000162168}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x4521200, 0xc000c01e30})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc000cedeb0?, {0x4521200?, 0xc000c01e30?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x4521200, 0xc000c01e30}, {0x4521160, 0xc000c0e0b0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0x0?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2470
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

goroutine 2464 [syscall, 2 minutes]:
syscall.Syscall6(0x1f3e48a8678?, 0x1f3df410ed0?, 0x400?, 0x60db740?, 0xc0005e7800?, 0xc000bd5bf0?, 0xda7f79?, 0x101c000c10000?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x728, {0xc0005e79ec?, 0x214, 0xdfe17f?}, 0x400?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:454
syscall.Read(0xc000b838c8?, {0xc0005e79ec?, 0x0?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:433 +0x2d
internal/poll.(*FD).Read(0xc000b838c8, {0xc0005e79ec, 0x214, 0x214})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000c6b18, {0xc0005e79ec?, 0x1?, 0x0?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc0019e0330, {0x4521080, 0xc00072c038})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x4521200, 0xc0019e0330}, {0x4521080, 0xc00072c038}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000bd5e90?, {0x4521200, 0xc0019e0330})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc000bd5eb0?, {0x4521200?, 0xc0019e0330?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x4521200, 0xc0019e0330}, {0x4521160, 0xc0000c6b18}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0x41b8c58?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2487
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

goroutine 2487 [syscall, 2 minutes]:
syscall.Syscall(0xc0015a7d00?, 0x0?, 0xe8f83b?, 0x1000000000000?, 0x1e?)
	/usr/local/go/src/runtime/syscall_windows.go:457 +0x29
syscall.WaitForSingleObject(0x560, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1149 +0x5a
os.(*Process).wait(0xc001740180?)
	/usr/local/go/src/os/exec_windows.go:28 +0xe6
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc001740180)
	/usr/local/go/src/os/exec/exec.go:922 +0x45
os/exec.(*Cmd).Run(0xc001740180)
	/usr/local/go/src/os/exec/exec.go:626 +0x2d
k8s.io/minikube/test/integration.Run(0xc0015476c0, 0xc001740180)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc0015476c0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:112 +0x4b
testing.tRunner(0xc0015476c0, 0xc000c47d70)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2219
	/usr/local/go/src/testing/testing.go:1851 +0x3f6


Test pass (169/212)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 16.97
4 TestDownloadOnly/v1.20.0/preload-exists 0.09
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.3
9 TestDownloadOnly/v1.20.0/DeleteAll 0.76
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.87
12 TestDownloadOnly/v1.34.0/json-events 12.71
13 TestDownloadOnly/v1.34.0/preload-exists 0
16 TestDownloadOnly/v1.34.0/kubectl 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.27
18 TestDownloadOnly/v1.34.0/DeleteAll 0.92
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.66
21 TestBinaryMirror 7.02
22 TestOffline 547.21
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.29
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.27
27 TestAddons/Setup 490.1
29 TestAddons/serial/Volcano 65.95
31 TestAddons/serial/GCPAuth/Namespaces 0.33
32 TestAddons/serial/GCPAuth/FakeCredentials 11.69
35 TestAddons/parallel/Registry 39.76
36 TestAddons/parallel/RegistryCreds 15.88
37 TestAddons/parallel/Ingress 66.74
38 TestAddons/parallel/InspektorGadget 13.36
39 TestAddons/parallel/MetricsServer 21.69
41 TestAddons/parallel/CSI 88.18
42 TestAddons/parallel/Headlamp 44.16
43 TestAddons/parallel/CloudSpanner 23
44 TestAddons/parallel/LocalPath 91.63
45 TestAddons/parallel/NvidiaDevicePlugin 21.24
46 TestAddons/parallel/Yakd 28.02
48 TestAddons/StoppedEnableDisable 55.22
49 TestCertOptions 457.97
51 TestDockerFlags 387.64
53 TestForceSystemdEnv 507.56
60 TestErrorSpam/start 16.97
61 TestErrorSpam/status 36.05
62 TestErrorSpam/pause 22.33
63 TestErrorSpam/unpause 22.78
64 TestErrorSpam/stop 56.07
67 TestFunctional/serial/CopySyncFile 0.04
68 TestFunctional/serial/StartWithProxy 219.91
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 132.64
71 TestFunctional/serial/KubeContext 0.12
72 TestFunctional/serial/KubectlGetPods 0.23
75 TestFunctional/serial/CacheCmd/cache/add_remote 32.81
76 TestFunctional/serial/CacheCmd/cache/add_local 12.9
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.26
78 TestFunctional/serial/CacheCmd/cache/list 0.26
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 9.19
80 TestFunctional/serial/CacheCmd/cache/cache_reload 37.76
81 TestFunctional/serial/CacheCmd/cache/delete 0.52
82 TestFunctional/serial/MinikubeKubectlCmd 0.49
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 2.54
84 TestFunctional/serial/ExtraConfig 139.37
85 TestFunctional/serial/ComponentHealth 0.17
86 TestFunctional/serial/LogsCmd 8.4
87 TestFunctional/serial/LogsFileCmd 10.6
88 TestFunctional/serial/InvalidService 20.77
90 TestFunctional/parallel/ConfigCmd 1.75
94 TestFunctional/parallel/StatusCmd 41.91
98 TestFunctional/parallel/ServiceCmdConnect 61.5
99 TestFunctional/parallel/AddonsCmd 0.89
100 TestFunctional/parallel/PersistentVolumeClaim 77.92
102 TestFunctional/parallel/SSHCmd 22.56
103 TestFunctional/parallel/CpCmd 61.03
104 TestFunctional/parallel/MySQL 58.3
105 TestFunctional/parallel/FileSync 11.7
106 TestFunctional/parallel/CertSync 66.69
110 TestFunctional/parallel/NodeLabels 0.22
112 TestFunctional/parallel/NonActiveRuntimeDisabled 11.16
114 TestFunctional/parallel/License 1.81
115 TestFunctional/parallel/ServiceCmd/DeployApp 10.42
116 TestFunctional/parallel/Version/short 0.28
117 TestFunctional/parallel/Version/components 8.19
118 TestFunctional/parallel/ImageCommands/ImageListShort 7.78
119 TestFunctional/parallel/ImageCommands/ImageListTable 7.64
120 TestFunctional/parallel/ImageCommands/ImageListJson 7.78
121 TestFunctional/parallel/ImageCommands/ImageListYaml 7.65
122 TestFunctional/parallel/ImageCommands/ImageBuild 27.58
123 TestFunctional/parallel/ImageCommands/Setup 2.55
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 21.12
125 TestFunctional/parallel/ServiceCmd/List 14.1
126 TestFunctional/parallel/ServiceCmd/JSONOutput 14.15
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 20.84
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 21.27
131 TestFunctional/parallel/DockerEnv/powershell 47.66
132 TestFunctional/parallel/ImageCommands/ImageSaveToFile 9.18
134 TestFunctional/parallel/ImageCommands/ImageRemove 17.78
135 TestFunctional/parallel/UpdateContextCmd/no_changes 2.51
136 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.66
137 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.52
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 17.54
139 TestFunctional/parallel/ProfileCmd/profile_not_create 14.2
140 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 8.62
141 TestFunctional/parallel/ProfileCmd/profile_list 14.67
143 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 9.61
144 TestFunctional/parallel/ProfileCmd/profile_json_output 14.6
145 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
147 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 16.85
153 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
154 TestFunctional/delete_echo-server_images 0.2
155 TestFunctional/delete_my-image_image 0.09
156 TestFunctional/delete_minikube_cached_images 0.08
161 TestMultiControlPlane/serial/StartCluster 746.71
162 TestMultiControlPlane/serial/DeployApp 12.62
164 TestMultiControlPlane/serial/AddWorkerNode 278.81
165 TestMultiControlPlane/serial/NodeLabels 0.19
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 48.39
167 TestMultiControlPlane/serial/CopyFile 629.6
171 TestImageBuild/serial/Setup 194.54
172 TestImageBuild/serial/NormalBuild 10.64
173 TestImageBuild/serial/BuildWithBuildArg 8.77
174 TestImageBuild/serial/BuildWithDockerIgnore 8.15
175 TestImageBuild/serial/BuildWithSpecifiedDockerfile 8.35
179 TestJSONOutput/start/Command 225.68
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 7.89
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 7.74
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 38.93
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.99
207 TestMainNoArgs 0.24
208 TestMinikubeProfile 525.45
211 TestMountStart/serial/StartWithMountFirst 150.08
212 TestMountStart/serial/VerifyMountFirst 9.34
213 TestMountStart/serial/StartWithMountSecond 149.8
214 TestMountStart/serial/VerifyMountSecond 9.4
215 TestMountStart/serial/DeleteFirst 30.09
216 TestMountStart/serial/VerifyMountPostDelete 9.11
217 TestMountStart/serial/Stop 27.47
218 TestMountStart/serial/RestartStopped 114.84
219 TestMountStart/serial/VerifyMountPostStop 9.23
222 TestMultiNode/serial/FreshStart2Nodes 437.22
223 TestMultiNode/serial/DeployApp2Nodes 9.47
225 TestMultiNode/serial/AddNode 235.29
226 TestMultiNode/serial/MultiNodeLabels 0.19
227 TestMultiNode/serial/ProfileList 35.39
228 TestMultiNode/serial/CopyFile 354.8
229 TestMultiNode/serial/StopNode 76.23
230 TestMultiNode/serial/StartAfterStop 188.07
235 TestPreload 522.86
236 TestScheduledStopWindows 323.4
241 TestRunningBinaryUpgrade 936.82
243 TestKubernetesUpgrade 1287.36
256 TestStoppedBinaryUpgrade/Setup 0.94
257 TestStoppedBinaryUpgrade/Upgrade 1089.45
266 TestPause/serial/Start 460.66
267 TestPause/serial/SecondStartNoReconfiguration 298.05
268 TestStoppedBinaryUpgrade/MinikubeLogs 10.15
270 TestNoKubernetes/serial/StartNoK8sWithVersion 0.43
271 TestNoKubernetes/serial/StartWithK8s 279.73
272 TestPause/serial/Pause 8.11
273 TestPause/serial/VerifyStatus 12.38
274 TestPause/serial/Unpause 9.52
275 TestPause/serial/PauseAgain 9.03
276 TestPause/serial/DeletePaused 48.07
277 TestPause/serial/VerifyDeletedResources 17.61
TestDownloadOnly/v1.20.0/json-events (16.97s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-813400 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-813400 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (16.9665095s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (16.97s)

TestDownloadOnly/v1.20.0/preload-exists (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0903 22:17:17.511852    2220 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0903 22:17:17.599272    2220 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.09s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-813400
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-813400: exit status 85 (302.37ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬───────────────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                       ARGS                                                                        │       PROFILE        │       USER        │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼───────────────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-813400 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv │ download-only-813400 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 22:17 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴───────────────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/03 22:17:00
	Running on machine: minikube6
	Binary: Built with gc go1.24.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0903 22:17:00.653866    9032 out.go:360] Setting OutFile to fd 712 ...
	I0903 22:17:00.728113    9032 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 22:17:00.728113    9032 out.go:374] Setting ErrFile to fd 716...
	I0903 22:17:00.728113    9032 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W0903 22:17:00.747584    9032 root.go:314] Error reading config file at C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0903 22:17:00.756595    9032 out.go:368] Setting JSON to true
	I0903 22:17:00.760637    9032 start.go:130] hostinfo: {"hostname":"minikube6","uptime":20926,"bootTime":1756916894,"procs":180,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6282 Build 19045.6282","kernelVersion":"10.0.19045.6282 Build 19045.6282","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0903 22:17:00.760637    9032 start.go:138] gopshost.Virtualization returned error: not implemented yet
	I0903 22:17:00.766730    9032 out.go:99] [download-only-813400] minikube v1.36.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6282 Build 19045.6282
	W0903 22:17:00.766730    9032 preload.go:293] Failed to list preload files: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0903 22:17:00.766730    9032 notify.go:220] Checking for updates...
	I0903 22:17:00.770078    9032 out.go:171] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0903 22:17:00.772957    9032 out.go:171] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0903 22:17:00.775777    9032 out.go:171] MINIKUBE_LOCATION=21341
	I0903 22:17:00.778998    9032 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0903 22:17:00.786666    9032 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0903 22:17:00.787399    9032 driver.go:421] Setting default libvirt URI to qemu:///system
	I0903 22:17:06.067578    9032 out.go:99] Using the hyperv driver based on user configuration
	I0903 22:17:06.067673    9032 start.go:304] selected driver: hyperv
	I0903 22:17:06.067673    9032 start.go:918] validating driver "hyperv" against <nil>
	I0903 22:17:06.068040    9032 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0903 22:17:06.128333    9032 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=65534MB, container=0MB
	I0903 22:17:06.128811    9032 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0903 22:17:06.128811    9032 cni.go:84] Creating CNI manager for ""
	I0903 22:17:06.129929    9032 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0903 22:17:06.130079    9032 start.go:348] cluster config:
	{Name:download-only-813400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:6144 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-813400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 22:17:06.130079    9032 iso.go:125] acquiring lock: {Name:mk966bde02eeea119c68f0830e579f0a83ec9e11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 22:17:06.134628    9032 out.go:99] Downloading VM boot image ...
	I0903 22:17:06.135236    9032 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.36.0-1753487480-21147-amd64.iso
	I0903 22:17:10.576298    9032 out.go:99] Starting "download-only-813400" primary control-plane node in "download-only-813400" cluster
	I0903 22:17:10.576298    9032 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0903 22:17:10.630604    9032 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0903 22:17:10.630604    9032 cache.go:58] Caching tarball of preloaded images
	I0903 22:17:10.631407    9032 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0903 22:17:10.635955    9032 out.go:99] Downloading Kubernetes v1.20.0 preload ...
	I0903 22:17:10.635955    9032 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0903 22:17:10.711373    9032 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0903 22:17:13.815385    9032 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0903 22:17:13.817355    9032 preload.go:254] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0903 22:17:14.826706    9032 cache.go:61] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0903 22:17:14.827510    9032 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-813400\config.json ...
	I0903 22:17:14.828347    9032 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-813400\config.json: {Name:mk5119f97f8a39084c390b5b5ec1201a764012de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 22:17:14.829071    9032 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0903 22:17:14.831437    9032 download.go:108] Downloading: https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\windows\amd64\v1.20.0/kubectl.exe
	
	
	* The control-plane node download-only-813400 host does not exist
	  To start a cluster, run: "minikube start -p download-only-813400"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.30s)

TestDownloadOnly/v1.20.0/DeleteAll (0.76s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.76s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.87s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-813400
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.87s)

TestDownloadOnly/v1.34.0/json-events (12.71s)

=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-063800 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-063800 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=hyperv: (12.7054528s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (12.71s)

TestDownloadOnly/v1.34.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0903 22:17:32.242014    2220 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
I0903 22:17:32.242014    2220 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

TestDownloadOnly/v1.34.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.0/kubectl
--- PASS: TestDownloadOnly/v1.34.0/kubectl (0.00s)

TestDownloadOnly/v1.34.0/LogsDuration (0.27s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-063800
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-063800: exit status 85 (271.4648ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │       PROFILE        │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-813400 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv │ download-only-813400 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 22:17 UTC │                     │
	│ delete  │ --all                                                                                                                                             │ minikube             │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 22:17 UTC │ 03 Sep 25 22:17 UTC │
	│ delete  │ -p download-only-813400                                                                                                                           │ download-only-813400 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 22:17 UTC │ 03 Sep 25 22:17 UTC │
	│ start   │ -o=json --download-only -p download-only-063800 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=hyperv │ download-only-063800 │ minikube6\jenkins │ v1.36.0 │ 03 Sep 25 22:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/03 22:17:19
	Running on machine: minikube6
	Binary: Built with gc go1.24.6 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0903 22:17:19.647560    9972 out.go:360] Setting OutFile to fd 848 ...
	I0903 22:17:19.725116    9972 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 22:17:19.725116    9972 out.go:374] Setting ErrFile to fd 852...
	I0903 22:17:19.725116    9972 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 22:17:19.749726    9972 out.go:368] Setting JSON to true
	I0903 22:17:19.752027    9972 start.go:130] hostinfo: {"hostname":"minikube6","uptime":20945,"bootTime":1756916894,"procs":180,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6282 Build 19045.6282","kernelVersion":"10.0.19045.6282 Build 19045.6282","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0903 22:17:19.752027    9972 start.go:138] gopshost.Virtualization returned error: not implemented yet
	I0903 22:17:19.766477    9972 out.go:99] [download-only-063800] minikube v1.36.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6282 Build 19045.6282
	I0903 22:17:19.766808    9972 notify.go:220] Checking for updates...
	I0903 22:17:19.769773    9972 out.go:171] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0903 22:17:19.772696    9972 out.go:171] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0903 22:17:19.775709    9972 out.go:171] MINIKUBE_LOCATION=21341
	I0903 22:17:19.778484    9972 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0903 22:17:19.784091    9972 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0903 22:17:19.785499    9972 driver.go:421] Setting default libvirt URI to qemu:///system
	I0903 22:17:25.150243    9972 out.go:99] Using the hyperv driver based on user configuration
	I0903 22:17:25.150243    9972 start.go:304] selected driver: hyperv
	I0903 22:17:25.150243    9972 start.go:918] validating driver "hyperv" against <nil>
	I0903 22:17:25.150540    9972 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0903 22:17:25.209459    9972 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=65534MB, container=0MB
	I0903 22:17:25.210405    9972 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0903 22:17:25.210405    9972 cni.go:84] Creating CNI manager for ""
	I0903 22:17:25.210405    9972 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0903 22:17:25.210405    9972 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0903 22:17:25.210405    9972 start.go:348] cluster config:
	{Name:download-only-063800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:6144 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:download-only-063800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 22:17:25.211437    9972 iso.go:125] acquiring lock: {Name:mk966bde02eeea119c68f0830e579f0a83ec9e11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 22:17:25.215162    9972 out.go:99] Starting "download-only-063800" primary control-plane node in "download-only-063800" cluster
	I0903 22:17:25.215238    9972 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0903 22:17:25.264070    9972 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0903 22:17:25.264200    9972 cache.go:58] Caching tarball of preloaded images
	I0903 22:17:25.264838    9972 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0903 22:17:25.268399    9972 out.go:99] Downloading Kubernetes v1.34.0 preload ...
	I0903 22:17:25.268435    9972 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 ...
	I0903 22:17:25.350724    9972 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4?checksum=md5:994a4de1464928e89c992dfd0a962e35 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-063800 host does not exist
	  To start a cluster, run: "minikube start -p download-only-063800"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.27s)

TestDownloadOnly/v1.34.0/DeleteAll (0.92s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.92s)

TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.66s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-063800
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.66s)

TestBinaryMirror (7.02s)

=== RUN   TestBinaryMirror
I0903 22:17:35.740724    2220 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/windows/amd64/kubectl.exe.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-118200 --alsologtostderr --binary-mirror http://127.0.0.1:58392 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-118200 --alsologtostderr --binary-mirror http://127.0.0.1:58392 --driver=hyperv: (6.2595819s)
helpers_test.go:175: Cleaning up "binary-mirror-118200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-118200
--- PASS: TestBinaryMirror (7.02s)

TestOffline (547.21s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-143600 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-143600 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=hyperv: (8m26.115977s)
helpers_test.go:175: Cleaning up "offline-docker-143600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-143600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-143600: (41.0884736s)
--- PASS: TestOffline (547.21s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.29s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-933200
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-933200: exit status 85 (294.0109ms)
-- stdout --
	* Profile "addons-933200" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-933200"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.29s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.27s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-933200
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-933200: exit status 85 (273.8056ms)
-- stdout --
	* Profile "addons-933200" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-933200"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.27s)

TestAddons/Setup (490.1s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-933200 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=hyperv --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-933200 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=hyperv --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (8m10.0943513s)
--- PASS: TestAddons/Setup (490.10s)

TestAddons/serial/Volcano (65.95s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:876: volcano-admission stabilized in 25.8235ms
addons_test.go:868: volcano-scheduler stabilized in 26.5121ms
addons_test.go:884: volcano-controller stabilized in 29.6205ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-799f64f894-ncc8b" [058e391f-ee1b-418e-bc5d-b8580836b7c7] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.0082356s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-589c7dd587-75wwl" [a24b716f-89dd-4d28-897e-c1ae4692c3cf] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.0060213s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-7dc6969b45-x2xsn" [408976ce-54c6-4bee-aada-3345f3c29f40] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.0048994s
addons_test.go:903: (dbg) Run:  kubectl --context addons-933200 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-933200 create -f testdata\vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-933200 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [a4fd2afe-5716-4ca4-881a-18dc792c466a] Pending
helpers_test.go:352: "test-job-nginx-0" [a4fd2afe-5716-4ca4-881a-18dc792c466a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [a4fd2afe-5716-4ca4-881a-18dc792c466a] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 22.0078538s
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-933200 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-933200 addons disable volcano --alsologtostderr -v=1: (26.000931s)
--- PASS: TestAddons/serial/Volcano (65.95s)

TestAddons/serial/GCPAuth/Namespaces (0.33s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-933200 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-933200 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.33s)

TestAddons/serial/GCPAuth/FakeCredentials (11.69s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-933200 create -f testdata\busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-933200 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d293c517-6062-4149-ac32-b42d5e1ae55a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d293c517-6062-4149-ac32-b42d5e1ae55a] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.0060213s
addons_test.go:694: (dbg) Run:  kubectl --context addons-933200 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-933200 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-933200 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-933200 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.69s)

TestAddons/parallel/Registry (39.76s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 8.3634ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-44cl9" [197ad94a-1902-4078-91fd-dcaa903a7e38] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0041272s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-qrlpc" [b8ab126b-ad3f-4cb4-a36e-27b9c6bf6075] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.0059226s
addons_test.go:392: (dbg) Run:  kubectl --context addons-933200 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-933200 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-933200 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (9.0574561s)
addons_test.go:411: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-933200 ip
addons_test.go:411: (dbg) Done: out/minikube-windows-amd64.exe -p addons-933200 ip: (2.5595719s)
2025/09/03 22:28:03 [DEBUG] GET http://172.25.127.18:5000
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-933200 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-933200 addons disable registry --alsologtostderr -v=1: (16.8777918s)
--- PASS: TestAddons/parallel/Registry (39.76s)

TestAddons/parallel/RegistryCreds (15.88s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 13.2125ms
addons_test.go:325: (dbg) Run:  out/minikube-windows-amd64.exe addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-933200
addons_test.go:332: (dbg) Run:  kubectl --context addons-933200 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-933200 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-933200 addons disable registry-creds --alsologtostderr -v=1: (15.2884535s)
--- PASS: TestAddons/parallel/RegistryCreds (15.88s)

TestAddons/parallel/Ingress (66.74s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-933200 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-933200 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-933200 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [f489b53c-2f7b-4bcf-8106-80f8470b9d1d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [f489b53c-2f7b-4bcf-8106-80f8470b9d1d] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.0084031s
I0903 22:29:00.529079    2220 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-933200 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Done: out/minikube-windows-amd64.exe -p addons-933200 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (9.7828736s)
addons_test.go:288: (dbg) Run:  kubectl --context addons-933200 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-933200 ip
addons_test.go:293: (dbg) Done: out/minikube-windows-amd64.exe -p addons-933200 ip: (2.5303352s)
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 172.25.127.18
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-933200 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-933200 addons disable ingress-dns --alsologtostderr -v=1: (16.5558641s)
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-933200 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-933200 addons disable ingress --alsologtostderr -v=1: (22.3507382s)
--- PASS: TestAddons/parallel/Ingress (66.74s)

TestAddons/parallel/InspektorGadget (13.36s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-pq74f" [9b9c27df-e757-425b-8e27-7a167d20d17f] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.0077526s
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-933200 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-933200 addons disable inspektor-gadget --alsologtostderr -v=1: (7.349982s)
--- PASS: TestAddons/parallel/InspektorGadget (13.36s)

TestAddons/parallel/MetricsServer (21.69s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 13.5011ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-t7xg2" [c633a84c-41f2-44d2-90d4-df6ea5976d18] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.0068235s
addons_test.go:463: (dbg) Run:  kubectl --context addons-933200 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-933200 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-933200 addons disable metrics-server --alsologtostderr -v=1: (15.4660904s)
--- PASS: TestAddons/parallel/MetricsServer (21.69s)

TestAddons/parallel/CSI (88.18s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I0903 22:28:24.749798    2220 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0903 22:28:24.768559    2220 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0903 22:28:24.768559    2220 kapi.go:107] duration metric: took 18.8164ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 18.8164ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-933200 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-933200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-933200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-933200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-933200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-933200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-933200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-933200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-933200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-933200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-933200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-933200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-933200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-933200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-933200 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-933200 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [0610b557-0570-46e6-a166-a4878b9643ca] Pending
helpers_test.go:352: "task-pv-pod" [0610b557-0570-46e6-a166-a4878b9643ca] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [0610b557-0570-46e6-a166-a4878b9643ca] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.006591s
addons_test.go:572: (dbg) Run:  kubectl --context addons-933200 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-933200 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-933200 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-933200 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-933200 delete pod task-pv-pod: (1.5611694s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-933200 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-933200 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-933200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-933200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-933200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-933200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-933200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-933200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-933200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-933200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-933200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-933200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-933200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-933200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-933200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-933200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-933200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-933200 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [bcefabfd-9da2-403f-9f05-a567849a252b] Pending
helpers_test.go:352: "task-pv-pod-restore" [bcefabfd-9da2-403f-9f05-a567849a252b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [bcefabfd-9da2-403f-9f05-a567849a252b] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00835s
addons_test.go:614: (dbg) Run:  kubectl --context addons-933200 delete pod task-pv-pod-restore
addons_test.go:614: (dbg) Done: kubectl --context addons-933200 delete pod task-pv-pod-restore: (2.0301242s)
addons_test.go:618: (dbg) Run:  kubectl --context addons-933200 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-933200 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-933200 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-933200 addons disable volumesnapshots --alsologtostderr -v=1: (15.684002s)
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-933200 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-933200 addons disable csi-hostpath-driver --alsologtostderr -v=1: (21.5273365s)
--- PASS: TestAddons/parallel/CSI (88.18s)
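The repeated helpers_test.go:402 lines above are the harness's polling loop: it re-runs the same jsonpath query until the PVC reports the expected phase. A minimal sketch of that pattern, assuming a generic `poll_until` helper (hypothetical; the real logic lives in helpers_test.go):

```shell
# Re-run a command until its stdout equals the wanted value, the way the
# helpers above re-run `kubectl get pvc ... -o jsonpath={.status.phase}`.
poll_until() {
  want=$1; tries=$2; shift 2
  i=0
  while [ "$i" -lt "$tries" ]; do
    [ "$("$@" 2>/dev/null)" = "$want" ] && return 0
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Invocation mirroring this log (needs a live cluster):
# poll_until Bound 180 kubectl --context addons-933200 get pvc hpvc \
#   -o 'jsonpath={.status.phase}' -n default
```

The test's 6m0s budget corresponds to the retry count times the poll interval.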

TestAddons/parallel/Headlamp (44.16s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-933200 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-933200 --alsologtostderr -v=1: (15.7381342s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6f46646d79-r9xxc" [94d923e8-677e-4e12-9fa9-998a40a3a16d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6f46646d79-r9xxc" [94d923e8-677e-4e12-9fa9-998a40a3a16d] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 20.0072744s
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-933200 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-933200 addons disable headlamp --alsologtostderr -v=1: (8.4126099s)
--- PASS: TestAddons/parallel/Headlamp (44.16s)

TestAddons/parallel/CloudSpanner (23s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-c55d4cb6d-vw5bg" [3bbb562d-6618-48ac-958b-29c7445b3c38] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.0554842s
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-933200 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-933200 addons disable cloud-spanner --alsologtostderr -v=1: (16.912778s)
--- PASS: TestAddons/parallel/CloudSpanner (23.00s)

TestAddons/parallel/LocalPath (91.63s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-933200 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-933200 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-933200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-933200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-933200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-933200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-933200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-933200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-933200 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [f5e54f2b-1891-4b5c-9c00-878e905d4e60] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [f5e54f2b-1891-4b5c-9c00-878e905d4e60] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [f5e54f2b-1891-4b5c-9c00-878e905d4e60] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 11.0049143s
addons_test.go:967: (dbg) Run:  kubectl --context addons-933200 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-933200 ssh "cat /opt/local-path-provisioner/pvc-8bb3816a-c18b-4a5c-bf63-5734d2783b2f_default_test-pvc/file1"
addons_test.go:976: (dbg) Done: out/minikube-windows-amd64.exe -p addons-933200 ssh "cat /opt/local-path-provisioner/pvc-8bb3816a-c18b-4a5c-bf63-5734d2783b2f_default_test-pvc/file1": (10.7252367s)
addons_test.go:988: (dbg) Run:  kubectl --context addons-933200 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-933200 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-933200 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-933200 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (1m2.0297537s)
--- PASS: TestAddons/parallel/LocalPath (91.63s)
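The `ssh "cat /opt/local-path-provisioner/..."` step above reads the provisioned file through the host path that local-path-provisioner creates, named `<pvName>_<namespace>_<pvcName>` as observed in this run. A small sketch of deriving that path (the `pv_dir` helper is made up for illustration; values come from this log):

```shell
# Build the local-path-provisioner host directory from PV/PVC metadata,
# matching the path the test cats over ssh above.
pv_dir() {
  pv=$1; ns=$2; pvc=$3
  printf '/opt/local-path-provisioner/%s_%s_%s\n' "$pv" "$ns" "$pvc"
}
```

The PV name itself (`pvc-8bb3816a-...`) is generated by Kubernetes from the PVC's UID, which is why the test first fetches the bound PVC as JSON before constructing the path.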

TestAddons/parallel/NvidiaDevicePlugin (21.24s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-znlqf" [99b80bf5-e5b1-4d92-996a-904eea0a19d8] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0046868s
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-933200 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-933200 addons disable nvidia-device-plugin --alsologtostderr -v=1: (15.2272582s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (21.24s)

TestAddons/parallel/Yakd (28.02s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-ck8c4" [37975cd5-71a8-4a37-8be1-821da08a0cca] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.0063576s
addons_test.go:1053: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-933200 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-windows-amd64.exe -p addons-933200 addons disable yakd --alsologtostderr -v=1: (22.0077292s)
--- PASS: TestAddons/parallel/Yakd (28.02s)

TestAddons/StoppedEnableDisable (55.22s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-933200
addons_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-933200: (42.3863096s)
addons_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-933200
addons_test.go:176: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-933200: (5.4048704s)
addons_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-933200
addons_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-933200: (4.7109083s)
addons_test.go:185: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-933200
addons_test.go:185: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-933200: (2.7124375s)
--- PASS: TestAddons/StoppedEnableDisable (55.22s)

TestCertOptions (457.97s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-482800 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-482800 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (6m31.3918385s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-482800 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-482800 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (10.1141375s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-482800 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-482800 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-482800 -- "sudo cat /etc/kubernetes/admin.conf": (10.0622526s)
helpers_test.go:175: Cleaning up "cert-options-482800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-482800
E0904 01:13:04.830944    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-482800: (46.24593s)
--- PASS: TestCertOptions (457.97s)
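The `openssl x509 -text -noout` run above is how the test confirms that the extra `--apiserver-ips` / `--apiserver-names` values landed in the API server certificate's Subject Alternative Names. A standalone sketch of the same check against a throwaway self-signed cert (subject and paths are made up for illustration; `-addext` needs OpenSSL 1.1.1+):

```shell
# Generate a throwaway cert carrying the same SANs the test passes via
# --apiserver-names / --apiserver-ips, then inspect it as the test does.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$dir/key.pem" -out "$dir/cert.pem" \
  -subj "/CN=minikube" \
  -addext "subjectAltName=DNS:localhost,DNS:www.google.com,IP:127.0.0.1,IP:192.168.15.15"

# Same inspection the test runs over /var/lib/minikube/certs/apiserver.crt:
openssl x509 -text -noout -in "$dir/cert.pem" | grep -A1 "Subject Alternative Name"
```

In the real test the cert is read inside the VM over `minikube ssh`, which is why the command above takes ~10s in the log.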

TestDockerFlags (387.64s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-240000 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-240000 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (5m27.0864325s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-240000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-240000 ssh "sudo systemctl show docker --property=Environment --no-pager": (10.0535357s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-240000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-240000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (9.9284389s)
helpers_test.go:175: Cleaning up "docker-flags-240000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-240000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-240000: (40.5662912s)
--- PASS: TestDockerFlags (387.64s)

TestForceSystemdEnv (507.56s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-259400 --memory=3072 --alsologtostderr -v=5 --driver=hyperv
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-259400 --memory=3072 --alsologtostderr -v=5 --driver=hyperv: (7m30.9128215s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-259400 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-259400 ssh "docker info --format {{.CgroupDriver}}": (9.9920213s)
helpers_test.go:175: Cleaning up "force-systemd-env-259400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-259400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-259400: (46.6539508s)
--- PASS: TestForceSystemdEnv (507.56s)

TestErrorSpam/start (16.97s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-921000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-921000 start --dry-run
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-921000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-921000 start --dry-run: (5.6098793s)
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-921000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-921000 start --dry-run
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-921000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-921000 start --dry-run: (5.639412s)
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-921000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-921000 start --dry-run
error_spam_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-921000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-921000 start --dry-run: (5.7180467s)
--- PASS: TestErrorSpam/start (16.97s)

TestErrorSpam/status (36.05s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-921000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-921000 status
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-921000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-921000 status: (12.3065053s)
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-921000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-921000 status
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-921000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-921000 status: (12.0546132s)
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-921000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-921000 status
error_spam_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-921000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-921000 status: (11.6859308s)
--- PASS: TestErrorSpam/status (36.05s)

TestErrorSpam/pause (22.33s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-921000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-921000 pause
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-921000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-921000 pause: (7.6712555s)
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-921000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-921000 pause
E0903 22:35:53.170766    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:35:53.178178    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:35:53.190683    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:35:53.212675    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-921000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-921000 pause: (7.406172s)
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-921000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-921000 pause
E0903 22:35:53.254692    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:35:53.336553    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:35:53.499116    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:35:53.821262    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:35:54.463808    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:35:55.745912    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:35:58.308278    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-921000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-921000 pause: (7.2449024s)
--- PASS: TestErrorSpam/pause (22.33s)

TestErrorSpam/unpause (22.78s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-921000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-921000 unpause
E0903 22:36:03.431505    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-921000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-921000 unpause: (7.6418629s)
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-921000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-921000 unpause
E0903 22:36:13.673951    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-921000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-921000 unpause: (7.5582233s)
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-921000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-921000 unpause
error_spam_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-921000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-921000 unpause: (7.5724915s)
--- PASS: TestErrorSpam/unpause (22.78s)

TestErrorSpam/stop (56.07s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-921000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-921000 stop
E0903 22:36:34.156897    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-921000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-921000 stop: (35.0930261s)
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-921000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-921000 stop
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-921000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-921000 stop: (10.7324206s)
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-921000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-921000 stop
E0903 22:37:15.120768    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-921000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-921000 stop: (10.2364416s)
--- PASS: TestErrorSpam/stop (56.07s)

TestFunctional/serial/CopySyncFile (0.04s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\2220\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.04s)

TestFunctional/serial/StartWithProxy (219.91s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-228500 --memory=4096 --apiserver-port=8441 --wait=all --driver=hyperv
E0903 22:38:37.044253    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:40:53.174831    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-228500 --memory=4096 --apiserver-port=8441 --wait=all --driver=hyperv: (3m39.899807s)
--- PASS: TestFunctional/serial/StartWithProxy (219.91s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (132.64s)

=== RUN   TestFunctional/serial/SoftStart
I0903 22:41:15.249339    2220 config.go:182] Loaded profile config "functional-228500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-228500 --alsologtostderr -v=8
E0903 22:41:20.889297    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-228500 --alsologtostderr -v=8: (2m12.6419126s)
functional_test.go:678: soft start took 2m12.6433333s for "functional-228500" cluster.
I0903 22:43:27.893961    2220 config.go:182] Loaded profile config "functional-228500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (132.64s)

TestFunctional/serial/KubeContext (0.12s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.12s)

TestFunctional/serial/KubectlGetPods (0.23s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-228500 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.23s)

TestFunctional/serial/CacheCmd/cache/add_remote (32.81s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 cache add registry.k8s.io/pause:3.1: (11.0327436s)
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 cache add registry.k8s.io/pause:3.3: (10.9115219s)
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 cache add registry.k8s.io/pause:latest: (10.8650446s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (32.81s)

TestFunctional/serial/CacheCmd/cache/add_local (12.9s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-228500 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3680509754\001
functional_test.go:1092: (dbg) Done: docker build -t minikube-local-cache-test:functional-228500 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3680509754\001: (1.9860095s)
functional_test.go:1104: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 cache add minikube-local-cache-test:functional-228500
functional_test.go:1104: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 cache add minikube-local-cache-test:functional-228500: (10.5388713s)
functional_test.go:1109: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 cache delete minikube-local-cache-test:functional-228500
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-228500
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (12.90s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.26s)

TestFunctional/serial/CacheCmd/cache/list (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.26s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 ssh sudo crictl images
functional_test.go:1139: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 ssh sudo crictl images: (9.1863707s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.19s)

TestFunctional/serial/CacheCmd/cache/cache_reload (37.76s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1162: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 ssh sudo docker rmi registry.k8s.io/pause:latest: (9.2542659s)
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-228500 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (9.0672796s)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 cache reload: (10.3184485s)
functional_test.go:1178: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1178: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (9.1132304s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (37.76s)

TestFunctional/serial/CacheCmd/cache/delete (0.52s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.52s)

TestFunctional/serial/MinikubeKubectlCmd (0.49s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 kubectl -- --context functional-228500 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.49s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (2.54s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out\kubectl.exe --context functional-228500 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (2.54s)

TestFunctional/serial/ExtraConfig (139.37s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-228500 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0903 22:45:53.179331    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-228500 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (2m19.365759s)
functional_test.go:776: restart took 2m19.365759s for "functional-228500" cluster.
I0903 22:47:24.344759    2220 config.go:182] Loaded profile config "functional-228500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (139.37s)

TestFunctional/serial/ComponentHealth (0.17s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-228500 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.17s)

TestFunctional/serial/LogsCmd (8.4s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 logs
functional_test.go:1251: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 logs: (8.4025745s)
--- PASS: TestFunctional/serial/LogsCmd (8.40s)

TestFunctional/serial/LogsFileCmd (10.6s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 logs --file C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialLogsFileCmd608039127\001\logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 logs --file C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialLogsFileCmd608039127\001\logs.txt: (10.5789207s)
--- PASS: TestFunctional/serial/LogsFileCmd (10.60s)

TestFunctional/serial/InvalidService (20.77s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-228500 apply -f testdata\invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-228500
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-228500: exit status 115 (16.6309359s)

-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://172.25.120.45:32690 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_service_8fb87d8e79e761d215f3221b4a4d8a6300edfb06_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-228500 delete -f testdata\invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (20.77s)

TestFunctional/parallel/ConfigCmd (1.75s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-228500 config get cpus: exit status 14 (248.9269ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-228500 config get cpus: exit status 14 (258.6349ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (1.75s)

TestFunctional/parallel/StatusCmd (41.91s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 status
functional_test.go:869: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 status: (14.1286933s)
functional_test.go:875: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (14.328472s)
functional_test.go:887: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 status -o json
functional_test.go:887: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 status -o json: (13.4537576s)
--- PASS: TestFunctional/parallel/StatusCmd (41.91s)

TestFunctional/parallel/ServiceCmdConnect (61.5s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-228500 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-228500 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-m6wvj" [25bb3924-d40c-4399-a220-01257f4ab917] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-m6wvj" [25bb3924-d40c-4399-a220-01257f4ab917] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 43.0066893s
functional_test.go:1654: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 service hello-node-connect --url
functional_test.go:1654: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 service hello-node-connect --url: (17.8333979s)
functional_test.go:1660: found endpoint for hello-node-connect: http://172.25.120.45:30568
functional_test.go:1680: http://172.25.120.45:30568: success! body:
Request served by hello-node-connect-7d85dfc575-m6wvj

HTTP/1.1 GET /

Host: 172.25.120.45:30568
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (61.50s)

TestFunctional/parallel/AddonsCmd (0.89s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.89s)

TestFunctional/parallel/PersistentVolumeClaim (77.92s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [7ba89912-42ec-47a2-9030-8d30bbfe18ca] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.006561s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-228500 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-228500 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-228500 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-228500 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [8319fb5f-21ff-44b4-a752-9d7bca4a4869] Pending
helpers_test.go:352: "sp-pod" [8319fb5f-21ff-44b4-a752-9d7bca4a4869] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [8319fb5f-21ff-44b4-a752-9d7bca4a4869] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 25.0104336s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-228500 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-228500 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-228500 delete -f testdata/storage-provisioner/pod.yaml: (1.5255067s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-228500 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [e175468d-978f-4155-bd56-069844cdb743] Pending
helpers_test.go:352: "sp-pod" [e175468d-978f-4155-bd56-069844cdb743] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [e175468d-978f-4155-bd56-069844cdb743] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 43.007418s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-228500 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (77.92s)

TestFunctional/parallel/SSHCmd (22.56s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 ssh "echo hello"
functional_test.go:1730: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 ssh "echo hello": (10.9553795s)
functional_test.go:1747: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 ssh "cat /etc/hostname"
functional_test.go:1747: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 ssh "cat /etc/hostname": (11.6004076s)
--- PASS: TestFunctional/parallel/SSHCmd (22.56s)

TestFunctional/parallel/CpCmd (61.03s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 cp testdata\cp-test.txt /home/docker/cp-test.txt: (8.3999228s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 ssh -n functional-228500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 ssh -n functional-228500 "sudo cat /home/docker/cp-test.txt": (10.6822467s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 cp functional-228500:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd486823970\001\cp-test.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 cp functional-228500:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd486823970\001\cp-test.txt: (11.2268099s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 ssh -n functional-228500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 ssh -n functional-228500 "sudo cat /home/docker/cp-test.txt": (10.8652358s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (8.3575161s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 ssh -n functional-228500 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 ssh -n functional-228500 "sudo cat /tmp/does/not/exist/cp-test.txt": (11.4937417s)
--- PASS: TestFunctional/parallel/CpCmd (61.03s)

TestFunctional/parallel/MySQL (58.3s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-228500 replace --force -f testdata\mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-6rd92" [ee56751e-0520-4dfe-b8e6-5bb98d80529a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-6rd92" [ee56751e-0520-4dfe-b8e6-5bb98d80529a] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 46.005235s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-228500 exec mysql-5bb876957f-6rd92 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-228500 exec mysql-5bb876957f-6rd92 -- mysql -ppassword -e "show databases;": exit status 1 (274.1651ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I0903 22:51:24.208905    2220 retry.go:31] will retry after 762.669734ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-228500 exec mysql-5bb876957f-6rd92 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-228500 exec mysql-5bb876957f-6rd92 -- mysql -ppassword -e "show databases;": exit status 1 (304.6373ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I0903 22:51:25.289314    2220 retry.go:31] will retry after 1.388478631s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-228500 exec mysql-5bb876957f-6rd92 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-228500 exec mysql-5bb876957f-6rd92 -- mysql -ppassword -e "show databases;": exit status 1 (271.0763ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I0903 22:51:26.962835    2220 retry.go:31] will retry after 3.186308845s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-228500 exec mysql-5bb876957f-6rd92 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-228500 exec mysql-5bb876957f-6rd92 -- mysql -ppassword -e "show databases;": exit status 1 (296.6955ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I0903 22:51:30.460366    2220 retry.go:31] will retry after 4.890038062s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-228500 exec mysql-5bb876957f-6rd92 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (58.30s)

TestFunctional/parallel/FileSync (11.7s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/2220/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 ssh "sudo cat /etc/test/nested/copy/2220/hosts"
functional_test.go:1936: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 ssh "sudo cat /etc/test/nested/copy/2220/hosts": (11.6950189s)
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (11.70s)

TestFunctional/parallel/CertSync (66.69s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/2220.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 ssh "sudo cat /etc/ssl/certs/2220.pem"
functional_test.go:1978: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 ssh "sudo cat /etc/ssl/certs/2220.pem": (11.0093867s)
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/2220.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 ssh "sudo cat /usr/share/ca-certificates/2220.pem"
functional_test.go:1978: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 ssh "sudo cat /usr/share/ca-certificates/2220.pem": (11.177144s)
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1978: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 ssh "sudo cat /etc/ssl/certs/51391683.0": (11.1245693s)
functional_test.go:2004: Checking for existence of /etc/ssl/certs/22202.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 ssh "sudo cat /etc/ssl/certs/22202.pem"
functional_test.go:2005: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 ssh "sudo cat /etc/ssl/certs/22202.pem": (10.9448329s)
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/22202.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 ssh "sudo cat /usr/share/ca-certificates/22202.pem"
functional_test.go:2005: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 ssh "sudo cat /usr/share/ca-certificates/22202.pem": (10.7304026s)
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:2005: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (11.6998221s)
--- PASS: TestFunctional/parallel/CertSync (66.69s)

TestFunctional/parallel/NodeLabels (0.22s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-228500 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.22s)

TestFunctional/parallel/NonActiveRuntimeDisabled (11.16s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-228500 ssh "sudo systemctl is-active crio": exit status 1 (11.15952s)

-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (11.16s)
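
The non-zero exit here is the expected outcome, not a failure: `systemctl is-active` exits 0 only when the queried unit is active, and prints `inactive` with a non-zero status (3 by systemd's convention) otherwise, which `minikube ssh` surfaces as `Process exited with status 3`. A sketch of that exit-code mapping — the helper below is illustrative, not systemd source:

```go
package main

import "fmt"

// isActiveExitCode mirrors systemctl is-active's documented behavior:
// exit 0 for an active unit, a non-zero status (3) otherwise.
// Illustrative mapping only, not systemd code.
func isActiveExitCode(state string) int {
	if state == "active" {
		return 0
	}
	return 3
}

func main() {
	for _, s := range []string{"active", "inactive"} {
		fmt.Printf("is-active %s -> exit %d\n", s, isActiveExitCode(s))
	}
}
```

This is why the test asserts both the `inactive` stdout and the non-zero exit when checking that crio is disabled on a docker-runtime cluster.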

TestFunctional/parallel/License (1.81s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2293: (dbg) Done: out/minikube-windows-amd64.exe license: (1.7895009s)
--- PASS: TestFunctional/parallel/License (1.81s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.42s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-228500 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-228500 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-78hzs" [4202c029-179e-4fc1-a51a-bdba810327bc] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-78hzs" [4202c029-179e-4fc1-a51a-bdba810327bc] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.0083072s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.42s)

TestFunctional/parallel/Version/short (0.28s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 version --short
--- PASS: TestFunctional/parallel/Version/short (0.28s)

TestFunctional/parallel/Version/components (8.19s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 version -o=json --components: (8.1930426s)
--- PASS: TestFunctional/parallel/Version/components (8.19s)

TestFunctional/parallel/ImageCommands/ImageListShort (7.78s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 image ls --format short --alsologtostderr: (7.7791782s)
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-228500 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-228500
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-228500
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-228500 image ls --format short --alsologtostderr:
I0903 22:51:36.556262   12612 out.go:360] Setting OutFile to fd 1228 ...
I0903 22:51:36.630266   12612 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0903 22:51:36.630266   12612 out.go:374] Setting ErrFile to fd 1028...
I0903 22:51:36.630266   12612 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0903 22:51:36.648870   12612 config.go:182] Loaded profile config "functional-228500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0903 22:51:36.648870   12612 config.go:182] Loaded profile config "functional-228500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0903 22:51:36.649596   12612 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228500 ).state
I0903 22:51:38.995961   12612 main.go:141] libmachine: [stdout =====>] : Running

I0903 22:51:38.995961   12612 main.go:141] libmachine: [stderr =====>] : 
I0903 22:51:39.021511   12612 ssh_runner.go:195] Run: systemctl --version
I0903 22:51:39.021511   12612 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228500 ).state
I0903 22:51:41.267266   12612 main.go:141] libmachine: [stdout =====>] : Running

I0903 22:51:41.268007   12612 main.go:141] libmachine: [stderr =====>] : 
I0903 22:51:41.268091   12612 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-228500 ).networkadapters[0]).ipaddresses[0]
I0903 22:51:44.002512   12612 main.go:141] libmachine: [stdout =====>] : 172.25.120.45

I0903 22:51:44.003015   12612 main.go:141] libmachine: [stderr =====>] : 
I0903 22:51:44.003757   12612 sshutil.go:53] new ssh client: &{IP:172.25.120.45 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-228500\id_rsa Username:docker}
I0903 22:51:44.126313   12612 ssh_runner.go:235] Completed: systemctl --version: (5.104731s)
I0903 22:51:44.137729   12612 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (7.78s)

TestFunctional/parallel/ImageCommands/ImageListTable (7.64s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 image ls --format table --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 image ls --format table --alsologtostderr: (7.6347188s)
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-228500 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ registry.k8s.io/kube-apiserver              │ v1.34.0           │ 90550c43ad2bc │ 88MB   │
│ registry.k8s.io/kube-scheduler              │ v1.34.0           │ 46169d968e920 │ 52.8MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.0           │ a0af72f2ec6d6 │ 74.9MB │
│ docker.io/kicbase/echo-server               │ functional-228500 │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kicbase/echo-server               │ latest            │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ docker.io/library/minikube-local-cache-test │ functional-228500 │ b02960d41f974 │ 30B    │
│ docker.io/library/nginx                     │ latest            │ ad5708199ec7d │ 192MB  │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ docker.io/library/nginx                     │ alpine            │ 4a86014ec6994 │ 52.5MB │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 52546a367cc9e │ 75MB   │
│ registry.k8s.io/kube-proxy                  │ v1.34.0           │ df0860106674d │ 71.9MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0           │ 5f1f5298c888d │ 195MB  │
│ docker.io/library/mysql                     │ 5.7               │ 5107333e08a87 │ 501MB  │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-228500 image ls --format table --alsologtostderr:
I0903 22:51:46.086249    3684 out.go:360] Setting OutFile to fd 1792 ...
I0903 22:51:46.159862    3684 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0903 22:51:46.159862    3684 out.go:374] Setting ErrFile to fd 1408...
I0903 22:51:46.159862    3684 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0903 22:51:46.175431    3684 config.go:182] Loaded profile config "functional-228500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0903 22:51:46.176061    3684 config.go:182] Loaded profile config "functional-228500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0903 22:51:46.176841    3684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228500 ).state
I0903 22:51:48.477398    3684 main.go:141] libmachine: [stdout =====>] : Running

I0903 22:51:48.477693    3684 main.go:141] libmachine: [stderr =====>] : 
I0903 22:51:48.495901    3684 ssh_runner.go:195] Run: systemctl --version
I0903 22:51:48.495901    3684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228500 ).state
I0903 22:51:50.816219    3684 main.go:141] libmachine: [stdout =====>] : Running

I0903 22:51:50.816219    3684 main.go:141] libmachine: [stderr =====>] : 
I0903 22:51:50.816950    3684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-228500 ).networkadapters[0]).ipaddresses[0]
I0903 22:51:53.408477    3684 main.go:141] libmachine: [stdout =====>] : 172.25.120.45

I0903 22:51:53.408477    3684 main.go:141] libmachine: [stderr =====>] : 
I0903 22:51:53.409369    3684 sshutil.go:53] new ssh client: &{IP:172.25.120.45 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-228500\id_rsa Username:docker}
I0903 22:51:53.507311    3684 ssh_runner.go:235] Completed: systemctl --version: (5.0112744s)
I0903 22:51:53.517682    3684 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (7.64s)

TestFunctional/parallel/ImageCommands/ImageListJson (7.78s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 image ls --format json --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 image ls --format json --alsologtostderr: (7.7821464s)
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-228500 image ls --format json --alsologtostderr:
[{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"88000000"},{"id":"ad5708199ec7d169c6837fe46e1646603d0f7d0a0f54d3cd8d07bc1c818d0224","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195000000"},{"id":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":[],
"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"52800000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-228500","docker.io/kicbase/echo-server:latest"],"size":"4940000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"75000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"b02960d41f974df2ab84cda9c58ebcb935e3579a7faf15eb71f2ecfa76fac0d4","repoDigests":[],"repoTags":["docker.i
o/library/minikube-local-cache-test:functional-228500"],"size":"30"},{"id":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"74900000"},{"id":"df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"71900000"},{"id":"4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"52500000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-228500 image ls --format json --alsologtostderr:
I0903 22:51:44.335765   12056 out.go:360] Setting OutFile to fd 1444 ...
I0903 22:51:44.408793   12056 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0903 22:51:44.408793   12056 out.go:374] Setting ErrFile to fd 1456...
I0903 22:51:44.408793   12056 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0903 22:51:44.423784   12056 config.go:182] Loaded profile config "functional-228500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0903 22:51:44.423784   12056 config.go:182] Loaded profile config "functional-228500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0903 22:51:44.424784   12056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228500 ).state
I0903 22:51:46.689998   12056 main.go:141] libmachine: [stdout =====>] : Running

I0903 22:51:46.689998   12056 main.go:141] libmachine: [stderr =====>] : 
I0903 22:51:46.705905   12056 ssh_runner.go:195] Run: systemctl --version
I0903 22:51:46.705905   12056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228500 ).state
I0903 22:51:49.010525   12056 main.go:141] libmachine: [stdout =====>] : Running
                                                
I0903 22:51:49.011073   12056 main.go:141] libmachine: [stderr =====>] : 
I0903 22:51:49.011242   12056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-228500 ).networkadapters[0]).ipaddresses[0]
I0903 22:51:51.815170   12056 main.go:141] libmachine: [stdout =====>] : 172.25.120.45

I0903 22:51:51.815170   12056 main.go:141] libmachine: [stderr =====>] : 
I0903 22:51:51.815170   12056 sshutil.go:53] new ssh client: &{IP:172.25.120.45 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-228500\id_rsa Username:docker}
I0903 22:51:51.910447   12056 ssh_runner.go:235] Completed: systemctl --version: (5.2044692s)
I0903 22:51:51.922383   12056 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (7.78s)

TestFunctional/parallel/ImageCommands/ImageListYaml (7.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 image ls --format yaml --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 image ls --format yaml --alsologtostderr: (7.6456985s)
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-228500 image ls --format yaml --alsologtostderr:
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "88000000"
- id: a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "74900000"
- id: 4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "52500000"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "75000000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "71900000"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-228500
- docker.io/kicbase/echo-server:latest
size: "4940000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: b02960d41f974df2ab84cda9c58ebcb935e3579a7faf15eb71f2ecfa76fac0d4
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-228500
size: "30"
- id: 46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "52800000"
- id: ad5708199ec7d169c6837fe46e1646603d0f7d0a0f54d3cd8d07bc1c818d0224
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-228500 image ls --format yaml --alsologtostderr:
I0903 22:51:38.434198    6580 out.go:360] Setting OutFile to fd 1584 ...
I0903 22:51:38.540457    6580 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0903 22:51:38.540457    6580 out.go:374] Setting ErrFile to fd 1108...
I0903 22:51:38.540457    6580 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0903 22:51:38.558648    6580 config.go:182] Loaded profile config "functional-228500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0903 22:51:38.560203    6580 config.go:182] Loaded profile config "functional-228500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0903 22:51:38.560527    6580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228500 ).state
I0903 22:51:40.830638    6580 main.go:141] libmachine: [stdout =====>] : Running

I0903 22:51:40.830638    6580 main.go:141] libmachine: [stderr =====>] : 
I0903 22:51:40.843252    6580 ssh_runner.go:195] Run: systemctl --version
I0903 22:51:40.843252    6580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228500 ).state
I0903 22:51:43.048273    6580 main.go:141] libmachine: [stdout =====>] : Running

I0903 22:51:43.048273    6580 main.go:141] libmachine: [stderr =====>] : 
I0903 22:51:43.048273    6580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-228500 ).networkadapters[0]).ipaddresses[0]
I0903 22:51:45.750956    6580 main.go:141] libmachine: [stdout =====>] : 172.25.120.45

I0903 22:51:45.751021    6580 main.go:141] libmachine: [stderr =====>] : 
I0903 22:51:45.751021    6580 sshutil.go:53] new ssh client: &{IP:172.25.120.45 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-228500\id_rsa Username:docker}
I0903 22:51:45.871766    6580 ssh_runner.go:235] Completed: systemctl --version: (5.028444s)
I0903 22:51:45.883393    6580 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (7.65s)

TestFunctional/parallel/ImageCommands/ImageBuild (27.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-228500 ssh pgrep buildkitd: exit status 1 (10.0516313s)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 image build -t localhost/my-image:functional-228500 testdata\build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 image build -t localhost/my-image:functional-228500 testdata\build --alsologtostderr: (10.5279773s)
functional_test.go:338: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-228500 image build -t localhost/my-image:functional-228500 testdata\build --alsologtostderr:
I0903 22:51:52.806686   13104 out.go:360] Setting OutFile to fd 1268 ...
I0903 22:51:52.903590   13104 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0903 22:51:52.903590   13104 out.go:374] Setting ErrFile to fd 1228...
I0903 22:51:52.903590   13104 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0903 22:51:52.922554   13104 config.go:182] Loaded profile config "functional-228500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0903 22:51:52.943056   13104 config.go:182] Loaded profile config "functional-228500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0903 22:51:52.943669   13104 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228500 ).state
I0903 22:51:55.046978   13104 main.go:141] libmachine: [stdout =====>] : Running

I0903 22:51:55.046978   13104 main.go:141] libmachine: [stderr =====>] : 
I0903 22:51:55.059758   13104 ssh_runner.go:195] Run: systemctl --version
I0903 22:51:55.059758   13104 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-228500 ).state
I0903 22:51:57.078068   13104 main.go:141] libmachine: [stdout =====>] : Running

I0903 22:51:57.078068   13104 main.go:141] libmachine: [stderr =====>] : 
I0903 22:51:57.078068   13104 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-228500 ).networkadapters[0]).ipaddresses[0]
I0903 22:51:59.456263   13104 main.go:141] libmachine: [stdout =====>] : 172.25.120.45

I0903 22:51:59.456263   13104 main.go:141] libmachine: [stderr =====>] : 
I0903 22:51:59.457242   13104 sshutil.go:53] new ssh client: &{IP:172.25.120.45 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-228500\id_rsa Username:docker}
I0903 22:51:59.568162   13104 ssh_runner.go:235] Completed: systemctl --version: (4.5083412s)
I0903 22:51:59.568162   13104 build_images.go:161] Building image from path: C:\Users\jenkins.minikube6\AppData\Local\Temp\build.2103315020.tar
I0903 22:51:59.581853   13104 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0903 22:51:59.614389   13104 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2103315020.tar
I0903 22:51:59.622403   13104 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2103315020.tar: stat -c "%s %y" /var/lib/minikube/build/build.2103315020.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2103315020.tar': No such file or directory
I0903 22:51:59.622403   13104 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\AppData\Local\Temp\build.2103315020.tar --> /var/lib/minikube/build/build.2103315020.tar (3072 bytes)
I0903 22:51:59.687566   13104 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2103315020
I0903 22:51:59.718849   13104 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2103315020 -xf /var/lib/minikube/build/build.2103315020.tar
I0903 22:51:59.736864   13104 docker.go:361] Building image: /var/lib/minikube/build/build.2103315020
I0903 22:51:59.745852   13104 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-228500 /var/lib/minikube/build/build.2103315020
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.1s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

#4 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#4 ...

#5 [internal] load build context
#5 transferring context: 62B done
#5 DONE 0.1s

#4 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#4 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#4 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#4 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#4 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.2s done
#4 DONE 0.8s

#6 [2/3] RUN true
#6 DONE 0.4s

#7 [3/3] ADD content.txt /
#7 DONE 0.2s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:ff877cbbac00d2b89116240a5e7423a664494b1f58649d0003fab42758de4647
#8 writing image sha256:ff877cbbac00d2b89116240a5e7423a664494b1f58649d0003fab42758de4647 done
#8 naming to localhost/my-image:functional-228500 0.0s done
#8 DONE 0.2s
I0903 22:52:03.116994   13104 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-228500 /var/lib/minikube/build/build.2103315020: (3.3709645s)
I0903 22:52:03.132243   13104 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2103315020
I0903 22:52:03.171255   13104 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2103315020.tar
I0903 22:52:03.193237   13104 build_images.go:217] Built localhost/my-image:functional-228500 from C:\Users\jenkins.minikube6\AppData\Local\Temp\build.2103315020.tar
I0903 22:52:03.193371   13104 build_images.go:133] succeeded building to: functional-228500
I0903 22:52:03.193371   13104 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 image ls
functional_test.go:466: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 image ls: (6.9982994s)
E0903 22:52:16.260983    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (27.58s)

TestFunctional/parallel/ImageCommands/Setup (2.55s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.4071955s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-228500
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.55s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (21.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 image load --daemon kicbase/echo-server:functional-228500 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 image load --daemon kicbase/echo-server:functional-228500 --alsologtostderr: (12.4168909s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 image ls
functional_test.go:466: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 image ls: (8.7009528s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (21.12s)

TestFunctional/parallel/ServiceCmd/List (14.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 service list
functional_test.go:1469: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 service list: (14.0982026s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (14.10s)

TestFunctional/parallel/ServiceCmd/JSONOutput (14.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 service list -o json: (14.1458058s)
functional_test.go:1504: Took "14.1468236s" to run "out/minikube-windows-amd64.exe -p functional-228500 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (14.15s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (20.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 image load --daemon kicbase/echo-server:functional-228500 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 image load --daemon kicbase/echo-server:functional-228500 --alsologtostderr: (12.2243406s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 image ls
functional_test.go:466: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 image ls: (8.6123705s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (20.84s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (21.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-228500
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 image load --daemon kicbase/echo-server:functional-228500 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 image load --daemon kicbase/echo-server:functional-228500 --alsologtostderr: (11.8971734s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 image ls
functional_test.go:466: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 image ls: (8.3699069s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (21.27s)

TestFunctional/parallel/DockerEnv/powershell (47.66s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:514: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-228500 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-228500"
functional_test.go:514: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-228500 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-228500": (31.599611s)
functional_test.go:537: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-228500 docker-env | Invoke-Expression ; docker images"
functional_test.go:537: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-228500 docker-env | Invoke-Expression ; docker images": (16.0395534s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (47.66s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (9.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 image save kicbase/echo-server:functional-228500 C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:395: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 image save kicbase/echo-server:functional-228500 C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr: (9.184076s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (9.18s)

TestFunctional/parallel/ImageCommands/ImageRemove (17.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 image rm kicbase/echo-server:functional-228500 --alsologtostderr
functional_test.go:407: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 image rm kicbase/echo-server:functional-228500 --alsologtostderr: (9.1677459s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 image ls
functional_test.go:466: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 image ls: (8.6144259s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (17.78s)

TestFunctional/parallel/UpdateContextCmd/no_changes (2.51s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 update-context --alsologtostderr -v=2
functional_test.go:2124: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 update-context --alsologtostderr -v=2: (2.5127739s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.51s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.66s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 update-context --alsologtostderr -v=2
functional_test.go:2124: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 update-context --alsologtostderr -v=2: (2.6590533s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.66s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (2.52s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 update-context --alsologtostderr -v=2
functional_test.go:2124: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 update-context --alsologtostderr -v=2: (2.5143659s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.52s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (17.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 image load C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 image load C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr: (8.856699s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 image ls
functional_test.go:466: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 image ls: (8.6861976s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (17.54s)

TestFunctional/parallel/ProfileCmd/profile_not_create (14.2s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1290: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (13.8486295s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (14.20s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (8.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-228500
functional_test.go:439: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-228500 image save --daemon kicbase/echo-server:functional-228500 --alsologtostderr
functional_test.go:439: (dbg) Done: out/minikube-windows-amd64.exe -p functional-228500 image save --daemon kicbase/echo-server:functional-228500 --alsologtostderr: (8.3830045s)
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-228500
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (8.62s)

TestFunctional/parallel/ProfileCmd/profile_list (14.67s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1325: (dbg) Done: out/minikube-windows-amd64.exe profile list: (14.3588244s)
functional_test.go:1330: Took "14.3588244s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1344: Took "305.8942ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (14.67s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (9.61s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-228500 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-228500 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-228500 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 6148: OpenProcess: The parameter is incorrect.
helpers_test.go:525: unable to kill pid 8792: TerminateProcess: Access is denied.
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-228500 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (9.61s)
TestFunctional/parallel/ProfileCmd/profile_json_output (14.6s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1376: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (14.3016499s)
functional_test.go:1381: Took "14.3016499s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1394: Took "301.1356ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (14.60s)
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-228500 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.85s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-228500 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [7c0234ae-9903-47fa-96fd-4801c3f788be] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [7c0234ae-9903-47fa-96fd-4801c3f788be] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 16.0583815s
I0903 22:50:44.197586    2220 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.85s)
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-228500 tunnel --alsologtostderr] ...
helpers_test.go:519: unable to terminate pid 8636: Access is denied.
helpers_test.go:525: unable to kill pid 6624: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
TestFunctional/delete_echo-server_images (0.2s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-228500
--- PASS: TestFunctional/delete_echo-server_images (0.20s)
TestFunctional/delete_my-image_image (0.09s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-228500
--- PASS: TestFunctional/delete_my-image_image (0.09s)
TestFunctional/delete_minikube_cached_images (0.08s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-228500
--- PASS: TestFunctional/delete_minikube_cached_images (0.08s)
TestMultiControlPlane/serial/StartCluster (746.71s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=hyperv
E0903 22:58:04.719602    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:58:04.726135    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:58:04.738161    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:58:04.760757    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:58:04.802436    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:58:04.884894    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:58:05.047053    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:58:05.369166    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:58:06.011016    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:58:07.293459    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:58:09.856427    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:58:14.978568    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:58:25.221705    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:58:45.704327    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:59:26.667033    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:00:48.590675    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:00:53.192131    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:03:04.722475    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:03:32.436190    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:05:53.197075    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:08:04.726888    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=hyperv: (11m50.3960138s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 status --alsologtostderr -v 5
E0903 23:08:56.277811    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 status --alsologtostderr -v 5: (36.3178863s)
--- PASS: TestMultiControlPlane/serial/StartCluster (746.71s)
TestMultiControlPlane/serial/DeployApp (12.62s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 kubectl -- rollout status deployment/busybox: (5.4835265s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 kubectl -- exec busybox-7b57f96db7-5cfq2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 kubectl -- exec busybox-7b57f96db7-5cfq2 -- nslookup kubernetes.io: (1.1922907s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 kubectl -- exec busybox-7b57f96db7-c6z29 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 kubectl -- exec busybox-7b57f96db7-lxhhz -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 kubectl -- exec busybox-7b57f96db7-5cfq2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 kubectl -- exec busybox-7b57f96db7-c6z29 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 kubectl -- exec busybox-7b57f96db7-lxhhz -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 kubectl -- exec busybox-7b57f96db7-5cfq2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 kubectl -- exec busybox-7b57f96db7-c6z29 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 kubectl -- exec busybox-7b57f96db7-lxhhz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (12.62s)
TestMultiControlPlane/serial/AddWorkerNode (278.81s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 node add --alsologtostderr -v 5
E0903 23:10:53.200692    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:13:04.731350    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 node add --alsologtostderr -v 5: (3m50.7362054s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 status --alsologtostderr -v 5
E0903 23:14:27.808238    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 status --alsologtostderr -v 5: (48.069076s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (278.81s)
TestMultiControlPlane/serial/NodeLabels (0.19s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-270000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.19s)
TestMultiControlPlane/serial/HAppyAfterClusterStart (48.39s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
E0903 23:15:53.204047    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (48.3921834s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (48.39s)
TestMultiControlPlane/serial/CopyFile (629.6s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 status --output json --alsologtostderr -v 5: (48.1636356s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 cp testdata\cp-test.txt ha-270000:/home/docker/cp-test.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 cp testdata\cp-test.txt ha-270000:/home/docker/cp-test.txt: (9.5120858s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000 "sudo cat /home/docker/cp-test.txt": (9.5850198s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 cp ha-270000:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2830829437\001\cp-test_ha-270000.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 cp ha-270000:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2830829437\001\cp-test_ha-270000.txt: (9.5045578s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000 "sudo cat /home/docker/cp-test.txt": (9.5609075s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 cp ha-270000:/home/docker/cp-test.txt ha-270000-m02:/home/docker/cp-test_ha-270000_ha-270000-m02.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 cp ha-270000:/home/docker/cp-test.txt ha-270000-m02:/home/docker/cp-test_ha-270000_ha-270000-m02.txt: (16.8580099s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000 "sudo cat /home/docker/cp-test.txt": (9.5117991s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m02 "sudo cat /home/docker/cp-test_ha-270000_ha-270000-m02.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m02 "sudo cat /home/docker/cp-test_ha-270000_ha-270000-m02.txt": (9.511248s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 cp ha-270000:/home/docker/cp-test.txt ha-270000-m03:/home/docker/cp-test_ha-270000_ha-270000-m03.txt
E0903 23:18:04.735483    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 cp ha-270000:/home/docker/cp-test.txt ha-270000-m03:/home/docker/cp-test_ha-270000_ha-270000-m03.txt: (16.4372188s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000 "sudo cat /home/docker/cp-test.txt": (9.6121889s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m03 "sudo cat /home/docker/cp-test_ha-270000_ha-270000-m03.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m03 "sudo cat /home/docker/cp-test_ha-270000_ha-270000-m03.txt": (9.5312315s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 cp ha-270000:/home/docker/cp-test.txt ha-270000-m04:/home/docker/cp-test_ha-270000_ha-270000-m04.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 cp ha-270000:/home/docker/cp-test.txt ha-270000-m04:/home/docker/cp-test_ha-270000_ha-270000-m04.txt: (17.128872s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000 "sudo cat /home/docker/cp-test.txt": (9.5337812s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m04 "sudo cat /home/docker/cp-test_ha-270000_ha-270000-m04.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m04 "sudo cat /home/docker/cp-test_ha-270000_ha-270000-m04.txt": (9.6831174s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 cp testdata\cp-test.txt ha-270000-m02:/home/docker/cp-test.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 cp testdata\cp-test.txt ha-270000-m02:/home/docker/cp-test.txt: (9.4798607s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m02 "sudo cat /home/docker/cp-test.txt": (9.509828s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 cp ha-270000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2830829437\001\cp-test_ha-270000-m02.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 cp ha-270000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2830829437\001\cp-test_ha-270000-m02.txt: (9.6370325s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m02 "sudo cat /home/docker/cp-test.txt": (9.3930727s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 cp ha-270000-m02:/home/docker/cp-test.txt ha-270000:/home/docker/cp-test_ha-270000-m02_ha-270000.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 cp ha-270000-m02:/home/docker/cp-test.txt ha-270000:/home/docker/cp-test_ha-270000-m02_ha-270000.txt: (16.5937394s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m02 "sudo cat /home/docker/cp-test.txt": (9.5322995s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000 "sudo cat /home/docker/cp-test_ha-270000-m02_ha-270000.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000 "sudo cat /home/docker/cp-test_ha-270000-m02_ha-270000.txt": (9.5620304s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 cp ha-270000-m02:/home/docker/cp-test.txt ha-270000-m03:/home/docker/cp-test_ha-270000-m02_ha-270000-m03.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 cp ha-270000-m02:/home/docker/cp-test.txt ha-270000-m03:/home/docker/cp-test_ha-270000-m02_ha-270000-m03.txt: (16.6545462s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m02 "sudo cat /home/docker/cp-test.txt": (9.5348925s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m03 "sudo cat /home/docker/cp-test_ha-270000-m02_ha-270000-m03.txt"
E0903 23:20:53.208834    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m03 "sudo cat /home/docker/cp-test_ha-270000-m02_ha-270000-m03.txt": (9.4960269s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 cp ha-270000-m02:/home/docker/cp-test.txt ha-270000-m04:/home/docker/cp-test_ha-270000-m02_ha-270000-m04.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 cp ha-270000-m02:/home/docker/cp-test.txt ha-270000-m04:/home/docker/cp-test_ha-270000-m02_ha-270000-m04.txt: (16.8354774s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m02 "sudo cat /home/docker/cp-test.txt": (9.6618797s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m04 "sudo cat /home/docker/cp-test_ha-270000-m02_ha-270000-m04.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m04 "sudo cat /home/docker/cp-test_ha-270000-m02_ha-270000-m04.txt": (9.4936989s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 cp testdata\cp-test.txt ha-270000-m03:/home/docker/cp-test.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 cp testdata\cp-test.txt ha-270000-m03:/home/docker/cp-test.txt: (9.7007498s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m03 "sudo cat /home/docker/cp-test.txt": (9.4313515s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 cp ha-270000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2830829437\001\cp-test_ha-270000-m03.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 cp ha-270000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2830829437\001\cp-test_ha-270000-m03.txt: (9.5770091s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m03 "sudo cat /home/docker/cp-test.txt": (9.4455515s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 cp ha-270000-m03:/home/docker/cp-test.txt ha-270000:/home/docker/cp-test_ha-270000-m03_ha-270000.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 cp ha-270000-m03:/home/docker/cp-test.txt ha-270000:/home/docker/cp-test_ha-270000-m03_ha-270000.txt: (16.6142441s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m03 "sudo cat /home/docker/cp-test.txt": (9.6463948s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000 "sudo cat /home/docker/cp-test_ha-270000-m03_ha-270000.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000 "sudo cat /home/docker/cp-test_ha-270000-m03_ha-270000.txt": (9.6529417s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 cp ha-270000-m03:/home/docker/cp-test.txt ha-270000-m02:/home/docker/cp-test_ha-270000-m03_ha-270000-m02.txt
E0903 23:23:04.739148    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 cp ha-270000-m03:/home/docker/cp-test.txt ha-270000-m02:/home/docker/cp-test_ha-270000-m03_ha-270000-m02.txt: (16.7321752s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m03 "sudo cat /home/docker/cp-test.txt": (9.4751475s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m02 "sudo cat /home/docker/cp-test_ha-270000-m03_ha-270000-m02.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m02 "sudo cat /home/docker/cp-test_ha-270000-m03_ha-270000-m02.txt": (9.5679169s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 cp ha-270000-m03:/home/docker/cp-test.txt ha-270000-m04:/home/docker/cp-test_ha-270000-m03_ha-270000-m04.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 cp ha-270000-m03:/home/docker/cp-test.txt ha-270000-m04:/home/docker/cp-test_ha-270000-m03_ha-270000-m04.txt: (16.4616416s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m03 "sudo cat /home/docker/cp-test.txt": (9.4260787s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m04 "sudo cat /home/docker/cp-test_ha-270000-m03_ha-270000-m04.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m04 "sudo cat /home/docker/cp-test_ha-270000-m03_ha-270000-m04.txt": (9.3821931s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 cp testdata\cp-test.txt ha-270000-m04:/home/docker/cp-test.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 cp testdata\cp-test.txt ha-270000-m04:/home/docker/cp-test.txt: (9.5523059s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m04 "sudo cat /home/docker/cp-test.txt": (9.6121442s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 cp ha-270000-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2830829437\001\cp-test_ha-270000-m04.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 cp ha-270000-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2830829437\001\cp-test_ha-270000-m04.txt: (9.7949624s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m04 "sudo cat /home/docker/cp-test.txt": (9.7325789s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 cp ha-270000-m04:/home/docker/cp-test.txt ha-270000:/home/docker/cp-test_ha-270000-m04_ha-270000.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 cp ha-270000-m04:/home/docker/cp-test.txt ha-270000:/home/docker/cp-test_ha-270000-m04_ha-270000.txt: (16.6894407s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m04 "sudo cat /home/docker/cp-test.txt": (9.3875657s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000 "sudo cat /home/docker/cp-test_ha-270000-m04_ha-270000.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000 "sudo cat /home/docker/cp-test_ha-270000-m04_ha-270000.txt": (9.4170962s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 cp ha-270000-m04:/home/docker/cp-test.txt ha-270000-m02:/home/docker/cp-test_ha-270000-m04_ha-270000-m02.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 cp ha-270000-m04:/home/docker/cp-test.txt ha-270000-m02:/home/docker/cp-test_ha-270000-m04_ha-270000-m02.txt: (16.4887709s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m04 "sudo cat /home/docker/cp-test.txt"
E0903 23:25:36.293394    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m04 "sudo cat /home/docker/cp-test.txt": (9.5366468s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m02 "sudo cat /home/docker/cp-test_ha-270000-m04_ha-270000-m02.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m02 "sudo cat /home/docker/cp-test_ha-270000-m04_ha-270000-m02.txt": (9.3740683s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 cp ha-270000-m04:/home/docker/cp-test.txt ha-270000-m03:/home/docker/cp-test_ha-270000-m04_ha-270000-m03.txt
E0903 23:25:53.212498    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 cp ha-270000-m04:/home/docker/cp-test.txt ha-270000-m03:/home/docker/cp-test_ha-270000-m04_ha-270000-m03.txt: (16.333587s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m04 "sudo cat /home/docker/cp-test.txt": (9.4930144s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m03 "sudo cat /home/docker/cp-test_ha-270000-m04_ha-270000-m03.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p ha-270000 ssh -n ha-270000-m03 "sudo cat /home/docker/cp-test_ha-270000-m04_ha-270000-m03.txt": (9.5288249s)
--- PASS: TestMultiControlPlane/serial/CopyFile (629.60s)

TestImageBuild/serial/Setup (194.54s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-839900 --driver=hyperv
E0903 23:30:53.217422    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:31:07.824609    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-839900 --driver=hyperv: (3m14.5402152s)
--- PASS: TestImageBuild/serial/Setup (194.54s)

TestImageBuild/serial/NormalBuild (10.64s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-839900
E0903 23:33:04.747740    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-839900: (10.6437685s)
--- PASS: TestImageBuild/serial/NormalBuild (10.64s)

TestImageBuild/serial/BuildWithBuildArg (8.77s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-839900
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-839900: (8.7682066s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (8.77s)

TestImageBuild/serial/BuildWithDockerIgnore (8.15s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-839900
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-839900: (8.1482094s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (8.15s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (8.35s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-839900
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-839900: (8.3494383s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (8.35s)

TestJSONOutput/start/Command (225.68s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-051300 --output=json --user=testUser --memory=3072 --wait=true --driver=hyperv
E0903 23:35:53.222464    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:38:04.752671    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-051300 --output=json --user=testUser --memory=3072 --wait=true --driver=hyperv: (3m45.6743184s)
--- PASS: TestJSONOutput/start/Command (225.68s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (7.89s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-051300 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-051300 --output=json --user=testUser: (7.8863635s)
--- PASS: TestJSONOutput/pause/Command (7.89s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (7.74s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-051300 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-051300 --output=json --user=testUser: (7.7350159s)
--- PASS: TestJSONOutput/unpause/Command (7.74s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (38.93s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-051300 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-051300 --output=json --user=testUser: (38.9281986s)
--- PASS: TestJSONOutput/stop/Command (38.93s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.99s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-504600 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-504600 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (303.0233ms)

-- stdout --
	{"specversion":"1.0","id":"15eae681-e9d1-4bbc-bc23-c347bc45f60c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-504600] minikube v1.36.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6282 Build 19045.6282","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e81b8ae2-9616-4b89-9a2b-a71d65dad7c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube6\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"45188309-e2f7-4907-a835-73d70175ca3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"044f63f1-5cf2-43d0-b001-f5c430b81e78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"429839bb-84be-4a9e-91d9-cb36a85b1163","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21341"}}
	{"specversion":"1.0","id":"d4a1e8b9-6720-4f74-a55d-063d8ad2c178","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a2e2f464-9d61-4653-bf10-b2e6c0f90163","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-504600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-504600
--- PASS: TestErrorJSONOutput (0.99s)

TestMainNoArgs (0.24s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.24s)

TestMinikubeProfile (525.45s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-394000 --driver=hyperv
E0903 23:40:53.224942    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:42:16.310036    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-394000 --driver=hyperv: (3m10.7838168s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-394000 --driver=hyperv
E0903 23:43:04.756823    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-394000 --driver=hyperv: (3m14.8396006s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-394000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
E0903 23:45:53.230041    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (24.3903158s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-394000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (24.3481875s)
helpers_test.go:175: Cleaning up "second-394000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-394000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-394000: (45.8023298s)
helpers_test.go:175: Cleaning up "first-394000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-394000
E0903 23:47:47.841318    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:48:04.760452    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-394000: (44.6153898s)
--- PASS: TestMinikubeProfile (525.45s)

TestMountStart/serial/StartWithMountFirst (150.08s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-933900 --memory=3072 --mount-string C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMountStartserial1258064499\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
mount_start_test.go:118: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-933900 --memory=3072 --mount-string C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMountStartserial1258064499\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m29.0767955s)
--- PASS: TestMountStart/serial/StartWithMountFirst (150.08s)

TestMountStart/serial/VerifyMountFirst (9.34s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-933900 ssh -- ls /minikube-host
mount_start_test.go:134: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-933900 ssh -- ls /minikube-host: (9.3443871s)
--- PASS: TestMountStart/serial/VerifyMountFirst (9.34s)

TestMountStart/serial/StartWithMountSecond (149.8s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-933900 --memory=3072 --mount-string C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMountStartserial1258064499\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E0903 23:50:53.233368    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:53:04.764530    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-933900 --memory=3072 --mount-string C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMountStartserial1258064499\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m28.7995075s)
--- PASS: TestMountStart/serial/StartWithMountSecond (149.80s)

TestMountStart/serial/VerifyMountSecond (9.4s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-933900 ssh -- ls /minikube-host
mount_start_test.go:134: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-933900 ssh -- ls /minikube-host: (9.3981733s)
--- PASS: TestMountStart/serial/VerifyMountSecond (9.40s)

TestMountStart/serial/DeleteFirst (30.09s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-933900 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-933900 --alsologtostderr -v=5: (30.0928746s)
--- PASS: TestMountStart/serial/DeleteFirst (30.09s)

TestMountStart/serial/VerifyMountPostDelete (9.11s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-933900 ssh -- ls /minikube-host
mount_start_test.go:134: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-933900 ssh -- ls /minikube-host: (9.1138398s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (9.11s)

TestMountStart/serial/Stop (27.47s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-933900
mount_start_test.go:196: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-933900: (27.4656623s)
--- PASS: TestMountStart/serial/Stop (27.47s)

TestMountStart/serial/RestartStopped (114.84s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-933900
E0903 23:55:53.236830    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-933900: (1m53.836323s)
--- PASS: TestMountStart/serial/RestartStopped (114.84s)

TestMountStart/serial/VerifyMountPostStop (9.23s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-933900 ssh -- ls /minikube-host
mount_start_test.go:134: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-933900 ssh -- ls /minikube-host: (9.2303243s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (9.23s)

TestMultiNode/serial/FreshStart2Nodes (437.22s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-477700 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=hyperv
E0903 23:58:04.769002    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:58:56.325714    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:00:53.241521    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:03:04.772762    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-477700 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=hyperv: (6m53.7178545s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 status --alsologtostderr: (23.4982714s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (437.22s)

TestMultiNode/serial/DeployApp2Nodes (9.47s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-477700 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-477700 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-477700 -- rollout status deployment/busybox: (4.0900967s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-477700 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-477700 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-477700 -- exec busybox-7b57f96db7-bj95n -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-477700 -- exec busybox-7b57f96db7-bj95n -- nslookup kubernetes.io: (1.1337588s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-477700 -- exec busybox-7b57f96db7-vpdc8 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-477700 -- exec busybox-7b57f96db7-bj95n -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-477700 -- exec busybox-7b57f96db7-vpdc8 -- nslookup kubernetes.default
E0904 00:04:27.857587    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-477700 -- exec busybox-7b57f96db7-bj95n -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-477700 -- exec busybox-7b57f96db7-vpdc8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (9.47s)

TestMultiNode/serial/AddNode (235.29s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-477700 -v=5 --alsologtostderr
E0904 00:05:53.246125    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:08:04.777230    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-477700 -v=5 --alsologtostderr: (3m20.1833398s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 status --alsologtostderr: (35.10429s)
--- PASS: TestMultiNode/serial/AddNode (235.29s)

TestMultiNode/serial/MultiNodeLabels (0.19s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-477700 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.19s)

TestMultiNode/serial/ProfileList (35.39s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (35.3875014s)
--- PASS: TestMultiNode/serial/ProfileList (35.39s)

TestMultiNode/serial/CopyFile (354.8s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 status --output json --alsologtostderr: (34.8478764s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 cp testdata\cp-test.txt multinode-477700:/home/docker/cp-test.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 cp testdata\cp-test.txt multinode-477700:/home/docker/cp-test.txt: (9.2463325s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 ssh -n multinode-477700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 ssh -n multinode-477700 "sudo cat /home/docker/cp-test.txt": (9.1018003s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 cp multinode-477700:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile1527156918\001\cp-test_multinode-477700.txt
E0904 00:10:53.249417    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 cp multinode-477700:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile1527156918\001\cp-test_multinode-477700.txt: (9.12943s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 ssh -n multinode-477700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 ssh -n multinode-477700 "sudo cat /home/docker/cp-test.txt": (9.2196223s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 cp multinode-477700:/home/docker/cp-test.txt multinode-477700-m02:/home/docker/cp-test_multinode-477700_multinode-477700-m02.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 cp multinode-477700:/home/docker/cp-test.txt multinode-477700-m02:/home/docker/cp-test_multinode-477700_multinode-477700-m02.txt: (16.4172531s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 ssh -n multinode-477700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 ssh -n multinode-477700 "sudo cat /home/docker/cp-test.txt": (9.403465s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 ssh -n multinode-477700-m02 "sudo cat /home/docker/cp-test_multinode-477700_multinode-477700-m02.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 ssh -n multinode-477700-m02 "sudo cat /home/docker/cp-test_multinode-477700_multinode-477700-m02.txt": (9.4242776s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 cp multinode-477700:/home/docker/cp-test.txt multinode-477700-m03:/home/docker/cp-test_multinode-477700_multinode-477700-m03.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 cp multinode-477700:/home/docker/cp-test.txt multinode-477700-m03:/home/docker/cp-test_multinode-477700_multinode-477700-m03.txt: (16.0586028s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 ssh -n multinode-477700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 ssh -n multinode-477700 "sudo cat /home/docker/cp-test.txt": (9.3205892s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 ssh -n multinode-477700-m03 "sudo cat /home/docker/cp-test_multinode-477700_multinode-477700-m03.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 ssh -n multinode-477700-m03 "sudo cat /home/docker/cp-test_multinode-477700_multinode-477700-m03.txt": (9.2192232s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 cp testdata\cp-test.txt multinode-477700-m02:/home/docker/cp-test.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 cp testdata\cp-test.txt multinode-477700-m02:/home/docker/cp-test.txt: (9.3579549s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 ssh -n multinode-477700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 ssh -n multinode-477700-m02 "sudo cat /home/docker/cp-test.txt": (9.1006112s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 cp multinode-477700-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile1527156918\001\cp-test_multinode-477700-m02.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 cp multinode-477700-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile1527156918\001\cp-test_multinode-477700-m02.txt: (9.2246256s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 ssh -n multinode-477700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 ssh -n multinode-477700-m02 "sudo cat /home/docker/cp-test.txt": (9.2083265s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 cp multinode-477700-m02:/home/docker/cp-test.txt multinode-477700:/home/docker/cp-test_multinode-477700-m02_multinode-477700.txt
E0904 00:13:04.780747    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 cp multinode-477700-m02:/home/docker/cp-test.txt multinode-477700:/home/docker/cp-test_multinode-477700-m02_multinode-477700.txt: (16.1024387s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 ssh -n multinode-477700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 ssh -n multinode-477700-m02 "sudo cat /home/docker/cp-test.txt": (9.3449373s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 ssh -n multinode-477700 "sudo cat /home/docker/cp-test_multinode-477700-m02_multinode-477700.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 ssh -n multinode-477700 "sudo cat /home/docker/cp-test_multinode-477700-m02_multinode-477700.txt": (9.4341435s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 cp multinode-477700-m02:/home/docker/cp-test.txt multinode-477700-m03:/home/docker/cp-test_multinode-477700-m02_multinode-477700-m03.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 cp multinode-477700-m02:/home/docker/cp-test.txt multinode-477700-m03:/home/docker/cp-test_multinode-477700-m02_multinode-477700-m03.txt: (16.0652268s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 ssh -n multinode-477700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 ssh -n multinode-477700-m02 "sudo cat /home/docker/cp-test.txt": (9.3719183s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 ssh -n multinode-477700-m03 "sudo cat /home/docker/cp-test_multinode-477700-m02_multinode-477700-m03.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 ssh -n multinode-477700-m03 "sudo cat /home/docker/cp-test_multinode-477700-m02_multinode-477700-m03.txt": (9.3356387s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 cp testdata\cp-test.txt multinode-477700-m03:/home/docker/cp-test.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 cp testdata\cp-test.txt multinode-477700-m03:/home/docker/cp-test.txt: (9.1726088s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 ssh -n multinode-477700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 ssh -n multinode-477700-m03 "sudo cat /home/docker/cp-test.txt": (9.1923541s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 cp multinode-477700-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile1527156918\001\cp-test_multinode-477700-m03.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 cp multinode-477700-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile1527156918\001\cp-test_multinode-477700-m03.txt: (9.2919657s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 ssh -n multinode-477700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 ssh -n multinode-477700-m03 "sudo cat /home/docker/cp-test.txt": (9.251262s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 cp multinode-477700-m03:/home/docker/cp-test.txt multinode-477700:/home/docker/cp-test_multinode-477700-m03_multinode-477700.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 cp multinode-477700-m03:/home/docker/cp-test.txt multinode-477700:/home/docker/cp-test_multinode-477700-m03_multinode-477700.txt: (16.3469448s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 ssh -n multinode-477700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 ssh -n multinode-477700-m03 "sudo cat /home/docker/cp-test.txt": (9.2514854s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 ssh -n multinode-477700 "sudo cat /home/docker/cp-test_multinode-477700-m03_multinode-477700.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 ssh -n multinode-477700 "sudo cat /home/docker/cp-test_multinode-477700-m03_multinode-477700.txt": (9.3876906s)
helpers_test.go:573: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 cp multinode-477700-m03:/home/docker/cp-test.txt multinode-477700-m02:/home/docker/cp-test_multinode-477700-m03_multinode-477700-m02.txt
helpers_test.go:573: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 cp multinode-477700-m03:/home/docker/cp-test.txt multinode-477700-m02:/home/docker/cp-test_multinode-477700-m03_multinode-477700-m02.txt: (16.4074124s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 ssh -n multinode-477700-m03 "sudo cat /home/docker/cp-test.txt"
E0904 00:15:36.341558    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 ssh -n multinode-477700-m03 "sudo cat /home/docker/cp-test.txt": (9.2845221s)
helpers_test.go:551: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 ssh -n multinode-477700-m02 "sudo cat /home/docker/cp-test_multinode-477700-m03_multinode-477700-m02.txt"
helpers_test.go:551: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 ssh -n multinode-477700-m02 "sudo cat /home/docker/cp-test_multinode-477700-m03_multinode-477700-m02.txt": (9.2633418s)
--- PASS: TestMultiNode/serial/CopyFile (354.80s)

TestMultiNode/serial/StopNode (76.23s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 node stop m03
E0904 00:15:53.253322    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 node stop m03: (24.7759591s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-477700 status: exit status 7 (26.154835s)

-- stdout --
	multinode-477700
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-477700-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-477700-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-477700 status --alsologtostderr: exit status 7 (25.2914181s)

-- stdout --
	multinode-477700
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-477700-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-477700-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0904 00:16:41.419987    4380 out.go:360] Setting OutFile to fd 584 ...
	I0904 00:16:41.497370    4380 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 00:16:41.497370    4380 out.go:374] Setting ErrFile to fd 1668...
	I0904 00:16:41.497370    4380 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 00:16:41.513049    4380 out.go:368] Setting JSON to false
	I0904 00:16:41.513049    4380 mustload.go:65] Loading cluster: multinode-477700
	I0904 00:16:41.513049    4380 notify.go:220] Checking for updates...
	I0904 00:16:41.513502    4380 config.go:182] Loaded profile config "multinode-477700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0904 00:16:41.513502    4380 status.go:174] checking status of multinode-477700 ...
	I0904 00:16:41.514814    4380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:16:43.628470    4380 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:16:43.629300    4380 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:16:43.629300    4380 status.go:371] multinode-477700 host status = "Running" (err=<nil>)
	I0904 00:16:43.629412    4380 host.go:66] Checking if "multinode-477700" exists ...
	I0904 00:16:43.630214    4380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:16:45.736742    4380 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:16:45.736992    4380 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:16:45.736992    4380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:16:48.206003    4380 main.go:141] libmachine: [stdout =====>] : 172.25.126.63
	
	I0904 00:16:48.206003    4380 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:16:48.206003    4380 host.go:66] Checking if "multinode-477700" exists ...
	I0904 00:16:48.218738    4380 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 00:16:48.218738    4380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700 ).state
	I0904 00:16:50.270234    4380 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:16:50.271342    4380 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:16:50.271342    4380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700 ).networkadapters[0]).ipaddresses[0]
	I0904 00:16:52.738762    4380 main.go:141] libmachine: [stdout =====>] : 172.25.126.63
	
	I0904 00:16:52.738762    4380 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:16:52.738762    4380 sshutil.go:53] new ssh client: &{IP:172.25.126.63 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700\id_rsa Username:docker}
	I0904 00:16:52.832861    4380 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.6140595s)
	I0904 00:16:52.845828    4380 ssh_runner.go:195] Run: systemctl --version
	I0904 00:16:52.869771    4380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 00:16:52.903768    4380 kubeconfig.go:125] found "multinode-477700" server: "https://172.25.126.63:8443"
	I0904 00:16:52.903975    4380 api_server.go:166] Checking apiserver status ...
	I0904 00:16:52.917161    4380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 00:16:52.964956    4380 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2591/cgroup
	W0904 00:16:52.988349    4380 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2591/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0904 00:16:53.000684    4380 ssh_runner.go:195] Run: ls
	I0904 00:16:53.009743    4380 api_server.go:253] Checking apiserver healthz at https://172.25.126.63:8443/healthz ...
	I0904 00:16:53.017251    4380 api_server.go:279] https://172.25.126.63:8443/healthz returned 200:
	ok
	I0904 00:16:53.017363    4380 status.go:463] multinode-477700 apiserver status = Running (err=<nil>)
	I0904 00:16:53.017363    4380 status.go:176] multinode-477700 status: &{Name:multinode-477700 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 00:16:53.017363    4380 status.go:174] checking status of multinode-477700-m02 ...
	I0904 00:16:53.018070    4380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:16:55.061417    4380 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:16:55.061417    4380 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:16:55.061417    4380 status.go:371] multinode-477700-m02 host status = "Running" (err=<nil>)
	I0904 00:16:55.061417    4380 host.go:66] Checking if "multinode-477700-m02" exists ...
	I0904 00:16:55.062289    4380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:16:57.202245    4380 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:16:57.203074    4380 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:16:57.203445    4380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:16:59.676262    4380 main.go:141] libmachine: [stdout =====>] : 172.25.125.181
	
	I0904 00:16:59.676262    4380 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:16:59.676262    4380 host.go:66] Checking if "multinode-477700-m02" exists ...
	I0904 00:16:59.688981    4380 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 00:16:59.688981    4380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m02 ).state
	I0904 00:17:01.813031    4380 main.go:141] libmachine: [stdout =====>] : Running
	
	I0904 00:17:01.813802    4380 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:17:01.813887    4380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-477700-m02 ).networkadapters[0]).ipaddresses[0]
	I0904 00:17:04.343326    4380 main.go:141] libmachine: [stdout =====>] : 172.25.125.181
	
	I0904 00:17:04.343326    4380 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:17:04.343326    4380 sshutil.go:53] new ssh client: &{IP:172.25.125.181 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-477700-m02\id_rsa Username:docker}
	I0904 00:17:04.447248    4380 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.7582013s)
	I0904 00:17:04.460790    4380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 00:17:04.489009    4380 status.go:176] multinode-477700-m02 status: &{Name:multinode-477700-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0904 00:17:04.489112    4380 status.go:174] checking status of multinode-477700-m03 ...
	I0904 00:17:04.490255    4380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-477700-m03 ).state
	I0904 00:17:06.557038    4380 main.go:141] libmachine: [stdout =====>] : Off
	
	I0904 00:17:06.557969    4380 main.go:141] libmachine: [stderr =====>] : 
	I0904 00:17:06.558045    4380 status.go:371] multinode-477700-m03 host status = "Stopped" (err=<nil>)
	I0904 00:17:06.558045    4380 status.go:384] host is not running, skipping remaining checks
	I0904 00:17:06.558083    4380 status.go:176] multinode-477700-m03 status: &{Name:multinode-477700-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (76.23s)

TestMultiNode/serial/StartAfterStop (188.07s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 node start m03 -v=5 --alsologtostderr
E0904 00:18:04.784599    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 node start m03 -v=5 --alsologtostderr: (2m33.04471s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-477700 status -v=5 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-477700 status -v=5 --alsologtostderr: (34.8510477s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (188.07s)

TestPreload (522.86s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-169600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0904 00:30:53.265715    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:32:16.357976    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:33:04.797402    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-169600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (4m30.7140566s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-169600 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-169600 image pull gcr.io/k8s-minikube/busybox: (9.0748415s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-169600
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-169600: (39.7570764s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-169600 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0904 00:35:53.269841    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-169600 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m34.7309723s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-169600 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-169600 image list: (7.1932102s)
helpers_test.go:175: Cleaning up "test-preload-169600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-169600
E0904 00:37:47.890956    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:38:04.800915    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-169600: (41.3918363s)
--- PASS: TestPreload (522.86s)

TestScheduledStopWindows (323.4s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-383300 --memory=3072 --driver=hyperv
E0904 00:40:53.274976    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-383300 --memory=3072 --driver=hyperv: (3m11.8336695s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-383300 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-383300 --schedule 5m: (10.6540585s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-383300 -n scheduled-stop-383300
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-383300 -n scheduled-stop-383300: exit status 1 (10.0160628s)
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-383300 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-383300 -- sudo systemctl show minikube-scheduled-stop --no-page: (9.3157692s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-383300 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-383300 --schedule 5s: (10.6062054s)
E0904 00:43:04.805884    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-383300
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-383300: exit status 7 (2.3618976s)

-- stdout --
	scheduled-stop-383300
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-383300 -n scheduled-stop-383300
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-383300 -n scheduled-stop-383300: exit status 7 (2.3086926s)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-383300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-383300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-383300: (26.2944926s)
--- PASS: TestScheduledStopWindows (323.40s)

TestRunningBinaryUpgrade (936.82s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.2606681994.exe start -p running-upgrade-858500 --memory=3072 --vm-driver=hyperv
E0904 00:50:53.282077    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.2606681994.exe start -p running-upgrade-858500 --memory=3072 --vm-driver=hyperv: (7m34.9540014s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-858500 --memory=3072 --alsologtostderr -v=1 --driver=hyperv
E0904 00:58:04.818145    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-858500 --memory=3072 --alsologtostderr -v=1 --driver=hyperv: (6m0.4756385s)
helpers_test.go:175: Cleaning up "running-upgrade-858500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-858500
helpers_test.go:178: (dbg) Non-zero exit: out/minikube-windows-amd64.exe delete -p running-upgrade-858500: exit status 1 (2m0.0104293s)

-- stdout --
	* Stopping node "running-upgrade-858500"  ...
	* Powering off "running-upgrade-858500" via SSH ...

-- /stdout --
helpers_test.go:180: failed cleanup: exit status 1
--- PASS: TestRunningBinaryUpgrade (936.82s)

TestKubernetesUpgrade (1287.36s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-143600 --memory=3072 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-143600 --memory=3072 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv: (3m22.9081587s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-143600
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-143600: (39.1186689s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-143600 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-143600 status --format={{.Host}}: exit status 7 (2.4286221s)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-143600 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=hyperv
E0904 00:48:04.809715    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:48:56.374719    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-143600 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=hyperv: (11m55.4229375s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-143600 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-143600 --memory=3072 --kubernetes-version=v1.20.0 --driver=hyperv
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-143600 --memory=3072 --kubernetes-version=v1.20.0 --driver=hyperv: exit status 106 (316.4004ms)

-- stdout --
	* [kubernetes-upgrade-143600] minikube v1.36.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6282 Build 19045.6282
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=21341
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-143600
	    minikube start -p kubernetes-upgrade-143600 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1436002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-143600 --kubernetes-version=v1.34.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-143600 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:275: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-143600 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=hyperv: (4m44.4637412s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-143600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-143600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-143600: (42.5371451s)
--- PASS: TestKubernetesUpgrade (1287.36s)

TestStoppedBinaryUpgrade/Setup (0.94s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.94s)

TestStoppedBinaryUpgrade/Upgrade (1089.45s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.3641142933.exe start -p stopped-upgrade-326200 --memory=3072 --vm-driver=hyperv
E0904 00:45:53.278257    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.3641142933.exe start -p stopped-upgrade-326200 --memory=3072 --vm-driver=hyperv: (9m56.7451882s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.3641142933.exe -p stopped-upgrade-326200 stop
E0904 00:54:27.906779    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.3641142933.exe -p stopped-upgrade-326200 stop: (34.301756s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-326200 --memory=3072 --alsologtostderr -v=1 --driver=hyperv
E0904 00:55:53.287020    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-326200 --memory=3072 --alsologtostderr -v=1 --driver=hyperv: (7m38.4025536s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (1089.45s)

TestPause/serial/Start (460.66s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-590700 --memory=3072 --install-addons=false --wait=all --driver=hyperv
E0904 00:53:04.813952    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-228500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-590700 --memory=3072 --install-addons=false --wait=all --driver=hyperv: (7m40.6547167s)
--- PASS: TestPause/serial/Start (460.66s)

TestPause/serial/SecondStartNoReconfiguration (298.05s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-590700 --alsologtostderr -v=1 --driver=hyperv
E0904 01:00:53.290359    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-590700 --alsologtostderr -v=1 --driver=hyperv: (4m58.0163225s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (298.05s)

TestStoppedBinaryUpgrade/MinikubeLogs (10.15s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-326200
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-326200: (10.1470177s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (10.15s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.43s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-686800 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-686800 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (425.7003ms)

-- stdout --
	* [NoKubernetes-686800] minikube v1.36.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6282 Build 19045.6282
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=21341
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.43s)

TestNoKubernetes/serial/StartWithK8s (279.73s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-686800 --memory=3072 --alsologtostderr -v=5 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-686800 --memory=3072 --alsologtostderr -v=5 --driver=hyperv: (4m27.294161s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-686800 status -o json
no_kubernetes_test.go:200: (dbg) Done: out/minikube-windows-amd64.exe -p NoKubernetes-686800 status -o json: (12.4340107s)
--- PASS: TestNoKubernetes/serial/StartWithK8s (279.73s)

TestPause/serial/Pause (8.11s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-590700 --alsologtostderr -v=5
E0904 01:05:36.391461    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-590700 --alsologtostderr -v=5: (8.1062037s)
--- PASS: TestPause/serial/Pause (8.11s)

TestPause/serial/VerifyStatus (12.38s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-590700 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-590700 --output=json --layout=cluster: exit status 2 (12.3752692s)

-- stdout --
	{"Name":"pause-590700","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-590700","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (12.38s)

TestPause/serial/Unpause (9.52s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-590700 --alsologtostderr -v=5
E0904 01:05:53.294636    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-590700 --alsologtostderr -v=5: (9.5188057s)
--- PASS: TestPause/serial/Unpause (9.52s)

TestPause/serial/PauseAgain (9.03s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-590700 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-590700 --alsologtostderr -v=5: (9.0247616s)
--- PASS: TestPause/serial/PauseAgain (9.03s)

TestPause/serial/DeletePaused (48.07s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-590700 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-590700 --alsologtostderr -v=5: (48.0674246s)
--- PASS: TestPause/serial/DeletePaused (48.07s)

TestPause/serial/VerifyDeletedResources (17.61s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (17.6068557s)
--- PASS: TestPause/serial/VerifyDeletedResources (17.61s)


Test skip (33/212)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.34.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

TestDownloadOnly/v1.34.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.01s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-228500 --alsologtostderr -v=1]
E0903 22:50:53.183772    2220 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-933200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:931: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-228500 --alsologtostderr -v=1] ...
helpers_test.go:519: unable to terminate pid 8256: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.01s)

TestFunctional/parallel/DryRun (5.03s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-228500 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:989: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-228500 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0273943s)

-- stdout --
	* [functional-228500] minikube v1.36.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6282 Build 19045.6282
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=21341
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
-- /stdout --
** stderr ** 
	I0903 22:50:31.196516    3880 out.go:360] Setting OutFile to fd 1376 ...
	I0903 22:50:31.274646    3880 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 22:50:31.274646    3880 out.go:374] Setting ErrFile to fd 696...
	I0903 22:50:31.274856    3880 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 22:50:31.298928    3880 out.go:368] Setting JSON to false
	I0903 22:50:31.302419    3880 start.go:130] hostinfo: {"hostname":"minikube6","uptime":22936,"bootTime":1756916894,"procs":183,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6282 Build 19045.6282","kernelVersion":"10.0.19045.6282 Build 19045.6282","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0903 22:50:31.302419    3880 start.go:138] gopshost.Virtualization returned error: not implemented yet
	I0903 22:50:31.310068    3880 out.go:179] * [functional-228500] minikube v1.36.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6282 Build 19045.6282
	I0903 22:50:31.319055    3880 notify.go:220] Checking for updates...
	I0903 22:50:31.321813    3880 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0903 22:50:31.326719    3880 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0903 22:50:31.331823    3880 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0903 22:50:31.341912    3880 out.go:179]   - MINIKUBE_LOCATION=21341
	I0903 22:50:31.346919    3880 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0903 22:50:31.351918    3880 config.go:182] Loaded profile config "functional-228500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0903 22:50:31.352916    3880 driver.go:421] Setting default libvirt URI to qemu:///system
** /stderr **
functional_test.go:995: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.03s)

TestFunctional/parallel/InternationalLanguage (5.03s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-228500 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-228500 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0252348s)

-- stdout --
	* [functional-228500] minikube v1.36.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.6282 Build 19045.6282
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=21341
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
-- /stdout --
** stderr ** 
	I0903 22:50:36.260853    8984 out.go:360] Setting OutFile to fd 1692 ...
	I0903 22:50:36.353810    8984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 22:50:36.353810    8984 out.go:374] Setting ErrFile to fd 1696...
	I0903 22:50:36.353810    8984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 22:50:36.373788    8984 out.go:368] Setting JSON to false
	I0903 22:50:36.377792    8984 start.go:130] hostinfo: {"hostname":"minikube6","uptime":22941,"bootTime":1756916894,"procs":183,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6282 Build 19045.6282","kernelVersion":"10.0.19045.6282 Build 19045.6282","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0903 22:50:36.377792    8984 start.go:138] gopshost.Virtualization returned error: not implemented yet
	I0903 22:50:36.381822    8984 out.go:179] * [functional-228500] minikube v1.36.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.6282 Build 19045.6282
	I0903 22:50:36.387801    8984 notify.go:220] Checking for updates...
	I0903 22:50:36.391814    8984 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0903 22:50:36.395783    8984 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0903 22:50:36.398834    8984 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0903 22:50:36.402813    8984 out.go:179]   - MINIKUBE_LOCATION=21341
	I0903 22:50:36.410781    8984 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0903 22:50:36.416826    8984 config.go:182] Loaded profile config "functional-228500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0903 22:50:36.417814    8984 driver.go:421] Setting default libvirt URI to qemu:///system
** /stderr **
functional_test.go:1040: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.03s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)